id (stringlengths 10–10) | title (stringlengths 19–145) | abstract (stringlengths 273–1.91k) | full_text (dict) | qas (dict) | figures_and_tables (dict) | question (sequence) | retrieval_gt (sequence) | answer_gt (sequence) | __index_level_0__ (int64 0–887)
---|---|---|---|---|---|---|---|---|---
1912.00667 | A Human-AI Loop Approach for Joint Keyword Discovery and Expectation Estimation in Micropost Event Detection | Microblogging platforms such as Twitter are increasingly being used in event detection. Existing approaches mainly use machine learning models and rely on event-related keywords to collect the data for model training. These approaches make strong assumptions on the distribution of the relevant micro-posts containing the keyword -- referred to as the expectation of the distribution -- and use it as a posterior regularization parameter during model training. Such approaches are, however, limited as they fail to reliably estimate the informativeness of a keyword and its expectation for model training. This paper introduces a Human-AI loop approach to jointly discover informative keywords for model training while estimating their expectation. Our approach iteratively leverages the crowd to estimate both keyword specific expectation and the disagreement between the crowd and the model in order to discover new keywords that are most beneficial for model training. These keywords and their expectation not only improve the resulting performance but also make the model training process more transparent. We empirically demonstrate the merits of our approach, both in terms of accuracy and interpretability, on multiple real-world datasets and show that our approach improves the state of the art by 24.3%. | {
"paragraphs": [
[
"Event detection on microblogging platforms such as Twitter aims to detect events preemptively. A main task in event detection is detecting events of predetermined types BIBREF0, such as concerts or controversial events based on microposts matching specific event descriptions. This task has extensive applications ranging from cyber security BIBREF1, BIBREF2 to political elections BIBREF3 or public health BIBREF4, BIBREF5. Due to the high ambiguity and inconsistency of the terms used in microposts, event detection is generally performed though statistical machine learning models, which require a labeled dataset for model training. Data labeling is, however, a long, laborious, and usually costly process. For the case of micropost classification, though positive labels can be collected (e.g., using specific hashtags, or event-related date-time information), there is no straightforward way to generate negative labels useful for model training. To tackle this lack of negative labels and the significant manual efforts in data labeling, BIBREF1 (BIBREF1, BIBREF3) introduced a weak supervision based learning approach, which uses only positively labeled data, accompanied by unlabeled examples by filtering microposts that contain a certain keyword indicative of the event type under consideration (e.g., `hack' for cyber security). Another key technique in this context is expectation regularization BIBREF6, BIBREF7, BIBREF1. Here, the estimated proportion of relevant microposts in an unlabeled dataset containing a keyword is given as a keyword-specific expectation. This expectation is used in the regularization term of the model's objective function to constrain the posterior distribution of the model predictions. By doing so, the model is trained with an expectation on its prediction for microposts that contain the keyword. Such a method, however, suffers from two key problems:",
"Due to the unpredictability of event occurrences and the constantly changing dynamics of users' posting frequency BIBREF8, estimating the expectation associated with a keyword is a challenging task, even for domain experts;",
"The performance of the event detection model is constrained by the informativeness of the keyword used for model training. As of now, we lack a principled method for discovering new keywords and improve the model performance.",
"To address the above issues, we advocate a human-AI loop approach for discovering informative keywords and estimating their expectations reliably. Our approach iteratively leverages 1) crowd workers for estimating keyword-specific expectations, and 2) the disagreement between the model and the crowd for discovering new informative keywords. More specifically, at each iteration after we obtain a keyword-specific expectation from the crowd, we train the model using expectation regularization and select those keyword-related microposts for which the model's prediction disagrees the most with the crowd's expectation; such microposts are then presented to the crowd to identify new keywords that best explain the disagreement. By doing so, our approach identifies new keywords which convey more relevant information with respect to existing ones, thus effectively boosting model performance. By exploiting the disagreement between the model and the crowd, our approach can make efficient use of the crowd, which is of critical importance in a human-in-the-loop context BIBREF9, BIBREF10. An additional advantage of our approach is that by obtaining new keywords that improve model performance over time, we are able to gain insight into how the model learns for specific event detection tasks. Such an advantage is particularly useful for event detection using complex models, e.g., deep neural networks, which are intrinsically hard to understand BIBREF11, BIBREF12. An additional challenge in involving crowd workers is that their contributions are not fully reliable BIBREF13. In the crowdsourcing literature, this problem is usually tackled with probabilistic latent variable models BIBREF14, BIBREF15, BIBREF16, which are used to perform truth inference by aggregating a redundant set of crowd contributions. Our human-AI loop approach improves the inference of keyword expectation by aggregating contributions not only from the crowd but also from the model. This, however, comes with its own challenge as the model's predictions are further dependent on the results of expectation inference, which is used for model training. To address this problem, we introduce a unified probabilistic model that seamlessly integrates expectation inference and model training, thereby allowing the former to benefit from the latter while resolving the inter-dependency between the two.",
"To the best of our knowledge, we are the first to propose a human-AI loop approach that iteratively improves machine learning models for event detection. In summary, our work makes the following key contributions:",
"A novel human-AI loop approach for micropost event detection that jointly discovers informative keywords and estimates their expectation;",
"A unified probabilistic model that infers keyword expectation and simultaneously performs model training;",
"An extensive empirical evaluation of our approach on multiple real-world datasets demonstrating that our approach significantly improves the state of the art by an average of 24.3% AUC.",
"The rest of this paper is organized as follows. First, we present our human-AI loop approach in Section SECREF2. Subsequently, we introduce our proposed probabilistic model in Section SECREF3. The experimental setup and results are presented in Section SECREF4. Finally, we briefly cover related work in Section SECREF5 before concluding our work in Section SECREF6."
],
[
"Given a set of labeled and unlabeled microposts, our goal is to extract informative keywords and estimate their expectations in order to train a machine learning model. To achieve this goal, our proposed human-AI loop approach comprises two crowdsourcing tasks, i.e., micropost classification followed by keyword discovery, and a unified probabilistic model for expectation inference and model training. Figure FIGREF6 presents an overview of our approach. Next, we describe our approach from a process-centric perspective.",
"Following previous studies BIBREF1, BIBREF17, BIBREF2, we collect a set of unlabeled microposts $\\mathcal {U}$ from a microblogging platform and post-filter, using an initial (set of) keyword(s), those microposts that are potentially relevant to an event category. Then, we collect a set of event-related microposts (i.e., positively labeled microposts) $\\mathcal {L}$, post-filtering with a list of seed events. $\\mathcal {U}$ and $\\mathcal {L}$ are used together to train a discriminative model (e.g., a deep neural network) for classifying the relevance of microposts to an event. We denote the target model as $p_\\theta (y|x)$, where $\\theta $ is the model parameter to be learned and $y$ is the label of an arbitrary micropost, represented by a bag-of-words vector $x$. Our approach iterates several times $t=\\lbrace 1, 2, \\ldots \\rbrace $ until the performance of the target model converges. Each iteration starts from the initial keyword(s) or the new keyword(s) discovered in the previous iteration. Given such a keyword, denoted by $w^{(t)}$, the iteration starts by sampling microposts containing the keyword from $\\mathcal {U}$, followed by dynamically creating micropost classification tasks and publishing them on a crowdsourcing platform.",
"Micropost Classification. The micropost classification task requires crowd workers to label the selected microposts into two classes: event-related and non event-related. In particular, workers are given instructions and examples to differentiate event-instance related microposts and general event-category related microposts. Consider, for example, the following microposts in the context of Cyber attack events, both containing the keyword `hack':",
"Credit firm Equifax says 143m Americans' social security numbers exposed in hack",
"This micropost describes an instance of a cyber attack event that the target model should identify. This is, therefore, an event-instance related micropost and should be considered as a positive example. Contrast this with the following example:",
"Companies need to step their cyber security up",
"This micropost, though related to cyber security in general, does not mention an instance of a cyber attack event, and is of no interest to us for event detection. This is an example of a general event-category related micropost and should be considered as a negative example.",
"In this task, each selected micropost is labeled by multiple crowd workers. The annotations are passed to our probabilistic model for expectation inference and model training.",
"Expectation Inference & Model Training. Our probabilistic model takes crowd-contributed labels and the model trained in the previous iteration as input. As output, it generates a keyword-specific expectation, denoted as $e^{(t)}$, and an improved version of the micropost classification model, denoted as $p_{\\theta ^{(t)}}(y|x)$. The details of our probabilistic model are given in Section SECREF3.",
"Keyword Discovery. The keyword discovery task aims at discovering a new keyword (or a set of keywords) that is most informative for model training with respect to existing keywords. To this end, we first apply the current model $p_{\\theta ^{(t)}}(y|x)$ on the unlabeled microposts $\\mathcal {U}$. For those that contain the keyword $w^{(t)}$, we calculate the disagreement between the model predictions and the keyword-specific expectation $e^{(t)}$:",
"and select the ones with the highest disagreement for keyword discovery. These selected microposts are supposed to contain information that can explain the disagreement between the model prediction and keyword-specific expectation, and can thus provide information that is most different from the existing set of keywords for model training.",
"For instance, our study shows that the expectation for the keyword `hack' is 0.20, which means only 20% of the initial set of microposts retrieved with the keyword are event-related. A micropost selected with the highest disagreement (Eq. DISPLAY_FORM7), whose likelihood of being event-related as predicted by the model is $99.9\\%$, is shown as an example below:",
"RT @xxx: Hong Kong securities brokers hit by cyber attacks, may face more: regulator #cyber #security #hacking https://t.co/rC1s9CB",
"This micropost contains keywords that can better indicate the relevance to a cyber security event than the initial keyword `hack', e.g., `securities', `hit', and `attack'.",
"Note that when the keyword-specific expectation $e^{(t)}$ in Equation DISPLAY_FORM7 is high, the selected microposts will be the ones that contain keywords indicating the irrelevance of the microposts to an event category. Such keywords are also useful for model training as they help improve the model's ability to identify irrelevant microposts.",
"To identify new keywords in the selected microposts, we again leverage crowdsourcing, as humans are typically better than machines at providing specific explanations BIBREF18, BIBREF19. In the crowdsourcing task, workers are first asked to find those microposts where the model predictions are deemed correct. Then, from those microposts, workers are asked to find the keyword that best indicates the class of the microposts as predicted by the model. The keyword most frequently identified by the workers is then used as the initial keyword for the following iteration. In case multiple keywords are selected, e.g., the top-$N$ frequent ones, workers will be asked to perform $N$ micropost classification tasks for each keyword in the next iteration, and the model training will be performed on multiple keyword-specific expectations."
],
[
"This section introduces our probabilistic model that infers keyword expectation and trains the target model simultaneously. We start by formalizing the problem and introducing our model, before describing the model learning method.",
"Problem Formalization. We consider the problem at iteration $t$ where the corresponding keyword is $w^{(t)}$. In the current iteration, let $\\mathcal {U}^{(t)} \\subset \\mathcal {U}$ denote the set of all microposts containing the keyword and $\\mathcal {M}^{(t)}= \\lbrace x_{m}\\rbrace _{m=1}^M\\subset \\mathcal {U}^{(t)}$ be the randomly selected subset of $M$ microposts labeled by $N$ crowd workers $\\mathcal {C} = \\lbrace c_n\\rbrace _{n=1}^N$. The annotations form a matrix $\\mathbf {A}\\in \\mathbb {R}^{M\\times N}$ where $\\mathbf {A}_{mn}$ is the label for the micropost $x_m$ contributed by crowd worker $c_n$. Our goal is to infer the keyword-specific expectation $e^{(t)}$ and train the target model by learning the model parameter $\\theta ^{(t)}$. An additional parameter of our probabilistic model is the reliability of crowd workers, which is essential when involving crowdsourcing. Following Dawid and Skene BIBREF14, BIBREF16, we represent the annotation reliability of worker $c_n$ by a latent confusion matrix $\\pi ^{(n)}$, where the $rs$-th element $\\pi _{rs}^{(n)}$ denotes the probability of $c_n$ labeling a micropost as class $r$ given the true class $s$."
],
[
"First, we introduce an expectation regularization technique for the weakly supervised learning of the target model $p_{\\theta ^{(t)}}(y|x)$. In this setting, the objective function of the target model is composed of two parts, corresponding to the labeled microposts $\\mathcal {L}$ and the unlabeled ones $\\mathcal {U}$.",
"The former part aims at maximizing the likelihood of the labeled microposts:",
"where we assume that $\\theta $ is generated from a prior distribution (e.g., Laplacian or Gaussian) parameterized by $\\sigma $.",
"To leverage unlabeled data for model training, we make use of the expectations of existing keywords, i.e., {($w^{(1)}$, $e^{(1)}$), ..., ($w^{(t-1)}$, $e^{(t-1)}$), ($w^{(t)}$, $e^{(t)}$)} (Note that $e^{(t)}$ is inferred), as a regularization term to constrain model training. To do so, we first give the model's expectation for each keyword $w^{(k)}$ ($1\\le k\\le t$) as follows:",
"which denotes the empirical expectation of the model’s posterior predictions on the unlabeled microposts $\\mathcal {U}^{(k)}$ containing keyword $w^{(k)}$. Expectation regularization can then be formulated as the regularization of the distance between the Bernoulli distribution parameterized by the model's expectation and the expectation of the existing keyword:",
"where $D_{KL}[\\cdot \\Vert \\cdot ]$ denotes the KL-divergence between the Bernoulli distributions $Ber(e^{(k)})$ and $Ber(\\mathbb {E}_{x\\sim \\mathcal {U}^{(k)}}(y))$, and $\\lambda $ controls the strength of expectation regularization."
],
[
"To learn the keyword-specific expectation $e^{(t)}$ and the crowd worker reliability $\\pi ^{(n)}$ ($1\\le n\\le N$), we model the likelihood of the crowd-contributed labels $\\mathbf {A}$ as a function of these parameters. In this context, we view the expectation as the class prior, thus performing expectation inference as the learning of the class prior. By doing so, we connect expectation inference with model training.",
"Specifically, we model the likelihood of an arbitrary crowd-contributed label $\\mathbf {A}_{mn}$ as a mixture of multinomials where the prior is the keyword-specific expectation $e^{(t)}$:",
"where $e_s^{(t)}$ is the probability of the ground truth label being $s$ given the keyword-specific expectation as the class prior; $K$ is the set of possible ground truth labels (binary in our context); and $r=\\mathbf {A}_{mn}$ is the crowd-contributed label. Then, for an individual micropost $x_m$, the likelihood of crowd-contributed labels $\\mathbf {A}_{m:}$ is given by:",
"Therefore, the objective function for maximizing the likelihood of the entire annotation matrix $\\mathbf {A}$ can be described as:"
],
[
"Integrating model training with expectation inference, the overall objective function of our proposed model is given by:",
"Figure FIGREF18 depicts a graphical representation of our model, which combines the target model for training (on the left) with the generative model for crowd-contributed labels (on the right) through a keyword-specific expectation.",
"Model Learning. Due to the unknown ground truth labels of crowd-annotated microposts ($y_m$ in Figure FIGREF18), we resort to expectation maximization for model learning. The learning algorithm iteratively takes two steps: the E-step and the M-step. The E-step infers the ground truth labels given the current model parameters. The M-step updates the model parameters, including the crowd reliability parameters $\\pi ^{(n)}$ ($1\\le n\\le N$), the keyword-specific expectation $e^{(t)}$, and the parameter of the target model $\\theta ^{(t)}$. The E-step and the crowd parameter update in the M-step are similar to the Dawid-Skene model BIBREF14. The keyword expectation is inferred by taking into account both the crowd-contributed labels and the model prediction:",
"The parameter of the target model is updated by gradient descent. For example, when the target model to be trained is a deep neural network, we use back-propagation with gradient descent to update the weight matrices."
],
[
"This section presents our experimental setup and results for evaluating our approach. We aim at answering the following questions:",
"[noitemsep,leftmargin=*]",
"Q1: How effectively does our proposed human-AI loop approach enhance the state-of-the-art machine learning models for event detection?",
"Q2: How well does our keyword discovery method work compare to existing keyword expansion methods?",
"Q3: How effective is our approach using crowdsourcing at obtaining new keywords compared with an approach labelling microposts for model training under the same cost?",
"Q4: How much benefit does our unified probabilistic model bring compared to methods that do not take crowd reliability into account?"
],
[
"Datasets. We perform our experiments with two predetermined event categories: cyber security (CyberAttack) and death of politicians (PoliticianDeath). These event categories are chosen as they are representative of important event types that are of interest to many governments and companies. The need to create our own dataset was motivated by the lack of public datasets for event detection on microposts. The few available datasets do not suit our requirements. For example, the publicly available Events-2012 Twitter dataset BIBREF20 contains generic event descriptions such as Politics, Sports, Culture etc. Our work targets more specific event categories BIBREF21. Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively. Unlabeled microposts are collected by using the keyword `hack' for CyberAttack, while for PoliticianDeath, we use a set of keywords related to `politician' and `death' (such as `bureaucrat', `dead' etc.) For each dataset, we randomly select 500 tweets from the unlabeled subset and manually label them for evaluation. Table TABREF25 shows key statistics from our two datasets.",
"Comparison Methods. To demonstrate the generality of our approach on different event detection models, we consider Logistic Regression (LR) BIBREF1 and Multilayer Perceptron (MLP) BIBREF2 as the target models. As the goal of our experiments is to demonstrate the effectiveness of our approach as a new model training technique, we use these widely used models. Also, we note that in our case other neural network models with more complex network architectures for event detection, such as the bi-directional LSTM BIBREF17, turn out to be less effective than a simple feedforward network. For both LR and MLP, we evaluate our proposed human-AI loop approach for keyword discovery and expectation estimation by comparing against the weakly supervised learning method proposed by BIBREF1 (BIBREF1) and BIBREF17 (BIBREF17) where only one initial keyword is used with an expectation estimated by an individual expert.",
"Parameter Settings. We empirically set optimal parameters based on a held-out validation set that contains 20% of the test data. These include the hyperparamters of the target model, those of our proposed probabilistic model, and the parameters used for training the target model. We explore MLP with 1, 2 and 3 hidden layers and apply a grid search in 32, 64, 128, 256, 512 for the dimension of the embeddings and that of the hidden layers. For the coefficient of expectation regularization, we follow BIBREF6 (BIBREF6) and set it to $\\lambda =10 \\times $ #labeled examples. For model training, we use the Adam BIBREF22 optimization algorithm for both models.",
"Evaluation. Following BIBREF1 (BIBREF1) and BIBREF3 (BIBREF3), we use accuracy and area under the precision-recall curve (AUC) metrics to measure the performance of our proposed approach. We note that due to the imbalance in our datasets (20% positive microposts in CyberAttack and 27% in PoliticianDeath), accuracy is dominated by negative examples; AUC, in comparison, better characterizes the discriminative power of the model.",
"Crowdsourcing. We chose Level 3 workers on the Figure-Eight crowdsourcing platform for our experiments. The inter-annotator agreement in micropost classification is taken into account through the EM algorithm. For keyword discovery, we filter keywords based on the frequency of the keyword being selected by the crowd. In terms of cost-effectiveness, our approach is motivated from the fact that crowdsourced data annotation can be expensive, and is thus designed with minimal crowd involvement. For each iteration, we selected 50 tweets for keyword discovery and 50 tweets for micropost classification per keyword. For a dataset with 80k tweets (e.g., CyberAttack), our approach only requires to manually inspect 800 tweets (for 8 keywords), which is only 1% of the entire dataset."
],
[
"Table TABREF26 reports the evaluation of our approach on both the CyberAttack and PoliticianDeath event categories. Our approach is configured such that each iteration starts with 1 new keyword discovered in the previous iteration.",
"Our approach improves LR by 5.17% (Accuracy) and 18.38% (AUC), and MLP by 10.71% (Accuracy) and 30.27% (AUC) on average. Such significant improvements clearly demonstrate that our approach is effective at improving model performance. We observe that the target models generally converge between the 7th and 9th iteration on both datasets when performance is measured by AUC. The performance can slightly degrade when the models are further trained for more iterations on both datasets. This is likely due to the fact that over time, the newly discovered keywords entail lower novel information for model training. For instance, for the CyberAttack dataset the new keyword in the 9th iteration `election' frequently co-occurs with the keyword `russia' in the 5th iteration (in microposts that connect Russian hackers with US elections), thus bringing limited new information for improving the model performance. As a side remark, we note that the models converge faster when performance is measured by accuracy. Such a comparison result confirms the difference between the metrics and shows the necessity for more keywords to discriminate event-related microposts from non event-related ones."
],
[
"Figure FIGREF31 shows the evaluation of our approach when discovering new informative keywords for model training (see Section SECREF2: Keyword Discovery). We compare our human-AI collaborative way of discovering new keywords against a query expansion (QE) approach BIBREF23, BIBREF24 that leverages word embeddings to find similar words in the latent semantic space. Specifically, we use pre-trained word embeddings based on a large Google News dataset for query expansion. For instance, the top keywords resulting from QE for `politician' are, `deputy',`ministry',`secretary', and `minister'. For each of these keywords, we use the crowd to label a set of tweets and obtain a corresponding expectation.",
"We observe that our approach consistently outperforms QE by an average of $4.62\\%$ and $52.58\\%$ AUC on CyberAttack and PoliticianDeath, respectively. The large gap between the performance improvements for the two datasets is mainly due to the fact that microposts that are relevant for PoliticianDeath are semantically more complex than those for CyberAttack, as they encode noun-verb relationship (e.g., “the king of ... died ...”) rather than a simple verb (e.g., “... hacked.”) for the CyberAttack microposts. QE only finds synonyms of existing keywords related to either `politician' or `death', however cannot find a meaningful keyword that fully characterizes the death of a politician. For instance, QE finds the keywords `kill' and `murder', which are semantically close to `death' but are not specifically relevant to the death of a politician. Unlike QE, our approach identifies keywords that go beyond mere synonyms and that are more directly related to the end task, i.e., discriminating event-related microposts from non related ones. Examples are `demise' and `condolence'. As a remark, we note that in Figure FIGREF31(b), the increase in QE performance on PoliticianDeath is due to the keywords `deputy' and `minister', which happen to be highly indicative of the death of a politician in our dataset; these keywords are also identified by our approach."
],
[
"To demonstrate the cost-effectiveness of using crowdsourcing for obtaining new keywords and consequently, their expectations, we compare the performance of our approach with an approach using crowdsourcing to only label microposts for model training at the same cost. Specifically, we conducted an additional crowdsourcing experiment where the same cost used for keyword discovery in our approach is used to label additional microposts for model training. These newly labeled microposts are used with the microposts labeled in the micropost classification task of our approach (see Section SECREF2: Micropost Classification) and the expectation of the initial keyword to train the model for comparison. The model trained in this way increases AUC by 0.87% for CyberAttack, and by 1.06% for PoliticianDeath; in comparison, our proposed approach increases AUC by 33.42% for PoliticianDeath and by 15.23% for CyberAttack over the baseline presented by BIBREF1). These results show that using crowdsourcing for keyword discovery is significantly more cost-effective than simply using crowdsourcing to get additional labels when training the model."
],
[
"To investigate the effectiveness of our expectation inference method, we compare it against a majority voting approach, a strong baseline in truth inference BIBREF16. Figure FIGREF36 shows the result of this evaluation. We observe that our approach results in better models for both CyberAttack and PoliticianDeath. Our manual investigation reveals that workers' annotations are of high reliability, which explains the relatively good performance of majority voting. Despite limited margin for improvement, our method of expectation inference improves the performance of majority voting by $0.4\\%$ and $1.19\\%$ AUC on CyberAttack and PoliticianDeath, respectively."
],
[
"Event Detection. The techniques for event extraction from microblogging platforms can be classified according to their domain specificity and their detection method BIBREF0. Early works mainly focus on open domain event detection BIBREF25, BIBREF26, BIBREF27. Our work falls into the category of domain-specific event detection BIBREF21, which has drawn increasing attention due to its relevance for various applications such as cyber security BIBREF1, BIBREF2 and public health BIBREF4, BIBREF5. In terms of technique, our proposed detection method is related to the recently proposed weakly supervised learning methods BIBREF1, BIBREF17, BIBREF3. This comes in contrast with fully-supervised learning methods, which are often limited by the size of the training data (e.g., a few hundred examples) BIBREF28, BIBREF29.",
"Human-in-the-Loop Approaches. Our work extends weakly supervised learning methods by involving humans in the loop BIBREF13. Existing human-in-the-loop approaches mainly leverage crowds to label individual data instances BIBREF9, BIBREF10 or to debug the training data BIBREF30, BIBREF31 or components BIBREF32, BIBREF33, BIBREF34 of a machine learning system. Unlike these works, we leverage crowd workers to label sampled microposts in order to obtain keyword-specific expectations, which can then be generalized to help classify microposts containing the same keyword, thus amplifying the utility of the crowd. Our work is further connected to the topic of interpretability and transparency of machine learning models BIBREF11, BIBREF35, BIBREF12, for which humans are increasingly involved, for instance for post-hoc evaluations of the model's interpretability. In contrast, our approach directly solicits informative keywords from the crowd for model training, thereby providing human-understandable explanations for the improved model."
],
[
"In this paper, we presented a new human-AI loop approach for keyword discovery and expectation estimation to better train event detection models. Our approach takes advantage of the disagreement between the crowd and the model to discover informative keywords and leverages the joint power of the crowd and the model in expectation inference. We evaluated our approach on real-world datasets and showed that it significantly outperforms the state of the art and that it is particularly useful for detecting events where relevant microposts are semantically complex, e.g., the death of a politician. As future work, we plan to parallelize the crowdsourcing tasks and optimize our pipeline in order to use our event detection approach in real-time."
],
[
"This project has received funding from the Swiss National Science Foundation (grant #407540_167320 Tighten-it-All) and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 683253/GraphInt)."
]
],
"section_name": [
"Introduction",
"The Human-AI Loop Approach",
"Unified Probabilistic Model",
"Unified Probabilistic Model ::: Expectation as Model Posterior",
"Unified Probabilistic Model ::: Expectation as Class Prior",
"Unified Probabilistic Model ::: Unified Probabilistic Model",
"Experiments and Results",
"Experiments and Results ::: Experimental Setup",
"Experiments and Results ::: Results of our Human-AI Loop (Q1)",
"Experiments and Results ::: Comparative Results on Keyword Discovery (Q2)",
"Experiments and Results ::: Cost-Effectiveness Results (Q3)",
"Experiments and Results ::: Expectation Inference Results (Q4)",
"Related Work",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"40835ff7baf6a65167a0bb570ef2b8fc62d2a52e",
"e7e3aa98ef49884fec805709fcc27639f13675db"
],
"answer": [
{
"evidence": [
"Datasets. We perform our experiments with two predetermined event categories: cyber security (CyberAttack) and death of politicians (PoliticianDeath). These event categories are chosen as they are representative of important event types that are of interest to many governments and companies. The need to create our own dataset was motivated by the lack of public datasets for event detection on microposts. The few available datasets do not suit our requirements. For example, the publicly available Events-2012 Twitter dataset BIBREF20 contains generic event descriptions such as Politics, Sports, Culture etc. Our work targets more specific event categories BIBREF21. Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively. Unlabeled microposts are collected by using the keyword `hack' for CyberAttack, while for PoliticianDeath, we use a set of keywords related to `politician' and `death' (such as `bureaucrat', `dead' etc.) For each dataset, we randomly select 500 tweets from the unlabeled subset and manually label them for evaluation. Table TABREF25 shows key statistics from our two datasets."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively. Unlabeled microposts are collected by using the keyword `hack' for CyberAttack, while for PoliticianDeath, we use a set of keywords related to `politician' and `death' (such as `bureaucrat', `dead' etc.)"
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"452482d9de8638fe92341e384e547474024f458b",
"8e893bc7d14a8c3786d2ebd0bae85a88db97464e"
],
"answer": [
{
"evidence": [
"This section introduces our probabilistic model that infers keyword expectation and trains the target model simultaneously. We start by formalizing the problem and introducing our model, before describing the model learning method."
],
"extractive_spans": [
"probabilistic model"
],
"free_form_answer": "",
"highlighted_evidence": [
"This section introduces our probabilistic model that infers keyword expectation and trains the target model simultaneously."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Comparison Methods. To demonstrate the generality of our approach on different event detection models, we consider Logistic Regression (LR) BIBREF1 and Multilayer Perceptron (MLP) BIBREF2 as the target models. As the goal of our experiments is to demonstrate the effectiveness of our approach as a new model training technique, we use these widely used models. Also, we note that in our case other neural network models with more complex network architectures for event detection, such as the bi-directional LSTM BIBREF17, turn out to be less effective than a simple feedforward network. For both LR and MLP, we evaluate our proposed human-AI loop approach for keyword discovery and expectation estimation by comparing against the weakly supervised learning method proposed by BIBREF1 (BIBREF1) and BIBREF17 (BIBREF17) where only one initial keyword is used with an expectation estimated by an individual expert."
],
"extractive_spans": [
"Logistic Regression",
"Multilayer Perceptron"
],
"free_form_answer": "",
"highlighted_evidence": [
"To demonstrate the generality of our approach on different event detection models, we consider Logistic Regression (LR) BIBREF1 and Multilayer Perceptron (MLP) BIBREF2 as the target models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"32f37720950ba7ec715734312b85876070931002",
"9cb62e60f4f395a1ff3e53c5d3cb6caa2a31ba49"
],
"answer": [
{
"evidence": [
"Datasets. We perform our experiments with two predetermined event categories: cyber security (CyberAttack) and death of politicians (PoliticianDeath). These event categories are chosen as they are representative of important event types that are of interest to many governments and companies. The need to create our own dataset was motivated by the lack of public datasets for event detection on microposts. The few available datasets do not suit our requirements. For example, the publicly available Events-2012 Twitter dataset BIBREF20 contains generic event descriptions such as Politics, Sports, Culture etc. Our work targets more specific event categories BIBREF21. Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively. Unlabeled microposts are collected by using the keyword `hack' for CyberAttack, while for PoliticianDeath, we use a set of keywords related to `politician' and `death' (such as `bureaucrat', `dead' etc.) For each dataset, we randomly select 500 tweets from the unlabeled subset and manually label them for evaluation. Table TABREF25 shows key statistics from our two datasets."
],
"extractive_spans": [],
"free_form_answer": "Tweets related to CyberAttack and tweets related to PoliticianDeath",
"highlighted_evidence": [
"Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Datasets. We perform our experiments with two predetermined event categories: cyber security (CyberAttack) and death of politicians (PoliticianDeath). These event categories are chosen as they are representative of important event types that are of interest to many governments and companies. The need to create our own dataset was motivated by the lack of public datasets for event detection on microposts. The few available datasets do not suit our requirements. For example, the publicly available Events-2012 Twitter dataset BIBREF20 contains generic event descriptions such as Politics, Sports, Culture etc. Our work targets more specific event categories BIBREF21. Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively. Unlabeled microposts are collected by using the keyword `hack' for CyberAttack, while for PoliticianDeath, we use a set of keywords related to `politician' and `death' (such as `bureaucrat', `dead' etc.) For each dataset, we randomly select 500 tweets from the unlabeled subset and manually label them for evaluation. Table TABREF25 shows key statistics from our two datasets."
],
"extractive_spans": [
"cyber security (CyberAttack)",
"death of politicians (PoliticianDeath)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We perform our experiments with two predetermined event categories: cyber security (CyberAttack) and death of politicians (PoliticianDeath)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2bf0eb023442571abc37860425e1ee162fcca33f",
"ab5175bcd7287c432072612b03032d992c5e9bc8"
],
"answer": [
{
"evidence": [
"Human-in-the-Loop Approaches. Our work extends weakly supervised learning methods by involving humans in the loop BIBREF13. Existing human-in-the-loop approaches mainly leverage crowds to label individual data instances BIBREF9, BIBREF10 or to debug the training data BIBREF30, BIBREF31 or components BIBREF32, BIBREF33, BIBREF34 of a machine learning system. Unlike these works, we leverage crowd workers to label sampled microposts in order to obtain keyword-specific expectations, which can then be generalized to help classify microposts containing the same keyword, thus amplifying the utility of the crowd. Our work is further connected to the topic of interpretability and transparency of machine learning models BIBREF11, BIBREF35, BIBREF12, for which humans are increasingly involved, for instance for post-hoc evaluations of the model's interpretability. In contrast, our approach directly solicits informative keywords from the crowd for model training, thereby providing human-understandable explanations for the improved model."
],
"extractive_spans": [],
"free_form_answer": "By involving humans for post-hoc evaluation of model's interpretability",
"highlighted_evidence": [
"Our work is further connected to the topic of interpretability and transparency of machine learning models BIBREF11, BIBREF35, BIBREF12, for which humans are increasingly involved, for instance for post-hoc evaluations of the model's interpretability. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Human-in-the-Loop Approaches. Our work extends weakly supervised learning methods by involving humans in the loop BIBREF13. Existing human-in-the-loop approaches mainly leverage crowds to label individual data instances BIBREF9, BIBREF10 or to debug the training data BIBREF30, BIBREF31 or components BIBREF32, BIBREF33, BIBREF34 of a machine learning system. Unlike these works, we leverage crowd workers to label sampled microposts in order to obtain keyword-specific expectations, which can then be generalized to help classify microposts containing the same keyword, thus amplifying the utility of the crowd. Our work is further connected to the topic of interpretability and transparency of machine learning models BIBREF11, BIBREF35, BIBREF12, for which humans are increasingly involved, for instance for post-hoc evaluations of the model's interpretability. In contrast, our approach directly solicits informative keywords from the crowd for model training, thereby providing human-understandable explanations for the improved model."
],
"extractive_spans": [
"directly solicits informative keywords from the crowd for model training, thereby providing human-understandable explanations for the improved model"
],
"free_form_answer": "",
"highlighted_evidence": [
"In contrast, our approach directly solicits informative keywords from the crowd for model training, thereby providing human-understandable explanations for the improved model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1178a1aa40237a93e83e6b6ceb4624ade0b998c8",
"8607351195ddbf243f31644ecc36a04c7bff2dbb"
],
"answer": [
{
"evidence": [
"Our approach improves LR by 5.17% (Accuracy) and 18.38% (AUC), and MLP by 10.71% (Accuracy) and 30.27% (AUC) on average. Such significant improvements clearly demonstrate that our approach is effective at improving model performance. We observe that the target models generally converge between the 7th and 9th iteration on both datasets when performance is measured by AUC. The performance can slightly degrade when the models are further trained for more iterations on both datasets. This is likely due to the fact that over time, the newly discovered keywords entail lower novel information for model training. For instance, for the CyberAttack dataset the new keyword in the 9th iteration `election' frequently co-occurs with the keyword `russia' in the 5th iteration (in microposts that connect Russian hackers with US elections), thus bringing limited new information for improving the model performance. As a side remark, we note that the models converge faster when performance is measured by accuracy. Such a comparison result confirms the difference between the metrics and shows the necessity for more keywords to discriminate event-related microposts from non event-related ones."
],
"extractive_spans": [
"significant improvements clearly demonstrate that our approach is effective at improving model performance"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our approach improves LR by 5.17% (Accuracy) and 18.38% (AUC), and MLP by 10.71% (Accuracy) and 30.27% (AUC) on average. Such significant improvements clearly demonstrate that our approach is effective at improving model performance."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Evaluation. Following BIBREF1 (BIBREF1) and BIBREF3 (BIBREF3), we use accuracy and area under the precision-recall curve (AUC) metrics to measure the performance of our proposed approach. We note that due to the imbalance in our datasets (20% positive microposts in CyberAttack and 27% in PoliticianDeath), accuracy is dominated by negative examples; AUC, in comparison, better characterizes the discriminative power of the model."
],
"extractive_spans": [],
"free_form_answer": "By evaluating the performance of the approach using accuracy and AUC",
"highlighted_evidence": [
"Following BIBREF1 (BIBREF1) and BIBREF3 (BIBREF3), we use accuracy and area under the precision-recall curve (AUC) metrics to measure the performance of our proposed approach. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"529250001f7cb8764f6c360dcc5ec83f08129bb0"
],
"answer": [
{
"evidence": [
"To identify new keywords in the selected microposts, we again leverage crowdsourcing, as humans are typically better than machines at providing specific explanations BIBREF18, BIBREF19. In the crowdsourcing task, workers are first asked to find those microposts where the model predictions are deemed correct. Then, from those microposts, workers are asked to find the keyword that best indicates the class of the microposts as predicted by the model. The keyword most frequently identified by the workers is then used as the initial keyword for the following iteration. In case multiple keywords are selected, e.g., the top-$N$ frequent ones, workers will be asked to perform $N$ micropost classification tasks for each keyword in the next iteration, and the model training will be performed on multiple keyword-specific expectations."
],
"extractive_spans": [
"workers are first asked to find those microposts where the model predictions are deemed correct",
" asked to find the keyword that best indicates the class of the microposts"
],
"free_form_answer": "",
"highlighted_evidence": [
"To identify new keywords in the selected microposts, we again leverage crowdsourcing, as humans are typically better than machines at providing specific explanations BIBREF18, BIBREF19. In the crowdsourcing task, workers are first asked to find those microposts where the model predictions are deemed correct. Then, from those microposts, workers are asked to find the keyword that best indicates the class of the microposts as predicted by the model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they report results only on English data?",
"What type of classifiers are used?",
"Which real-world datasets are used?",
"How are the interpretability merits of the approach demonstrated?",
"How are the accuracy merits of the approach demonstrated?",
"How is the keyword specific expectation elicited from the crowd?"
],
"question_id": [
"4ac2c3c259024d7cd8e449600b499f93332dab60",
"bc730e4d964b6a66656078e2da130310142ab641",
"3941401a182a3d6234894a5c8a75d48c6116c45c",
"67e9e147b2cab5ba43572ce8a17fc863690172f0",
"a74190189a6ced2a2d5b781e445e36f4e527e82a",
"43f074bacabd0a355b4e0f91a1afd538c0a6244f"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: An overview of our proposed human-AI loop approach. Starting from a (set of) new keyword(s), our approach is based on the following processes: 1) Micropost Classification, which samples a subset of the unlabeled microposts containing the keyword and asks crowd workers to label these microposts; 2) Expectation Inference & Model Training, which generates a keyword-specific expectation and a micropost classification model for event detection; 3) Keyword Discovery, which applies the trained model and calculates the disagreement between model prediction and the keyword-specific expectation for discovering new keywords, again by leveraging crowdsourcing.",
"Figure 2: Our proposed probabilistic model contains the target model (on the left) and the generative model for crowdcontributed labels (on the right), connected by keywordspecific expectation.",
"Table 1: Statistics of the datasets in our experiments.",
"Table 2: Performance of the target models trained by our proposed human-AI loop approach on the experimental datasets at different iterations. Results are given in percentage.",
"Figure 3: Comparison between our keyword discovery method and query expansion method for MLP (similar results for LR).",
"Figure 4: Comparison between our expectation inference method and majority voting for MLP (similar results for LR)."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Figure3-1.png",
"7-Figure4-1.png"
]
} | [
"Which real-world datasets are used?",
"How are the interpretability merits of the approach demonstrated?",
"How are the accuracy merits of the approach demonstrated?"
] | [
[
"1912.00667-Experiments and Results ::: Experimental Setup-0"
],
[
"1912.00667-Related Work-1"
],
[
"1912.00667-Experiments and Results ::: Experimental Setup-3",
"1912.00667-Experiments and Results ::: Results of our Human-AI Loop (Q1)-1"
]
] | [
"Tweets related to CyberAttack and tweets related to PoliticianDeath",
"By involving humans for post-hoc evaluation of model's interpretability",
"By evaluating the performance of the approach using accuracy and AUC"
] | 120 |
1912.08904 | Macaw: An Extensible Conversational Information Seeking Platform | Conversational information seeking (CIS) has been recognized as a major emerging research area in information retrieval. Such research will require data and tools, to allow the implementation and study of conversational systems. This paper introduces Macaw, an open-source framework with a modular architecture for CIS research. Macaw supports multi-turn, multi-modal, and mixed-initiative interactions, and enables research for tasks such as document retrieval, question answering, recommendation, and structured data exploration. It has a modular design to encourage the study of new CIS algorithms, which can be evaluated in batch mode. It can also integrate with a user interface, which allows user studies and data collection in an interactive mode, where the back end can be fully algorithmic or a wizard of oz setup. Macaw is distributed under the MIT License. | {
"paragraphs": [
[
"The rapid growth in speech and small screen interfaces, particularly on mobile devices, has significantly influenced the way users interact with intelligent systems to satisfy their information needs. The growing interest in personal digital assistants, such as Amazon Alexa, Apple Siri, Google Assistant, and Microsoft Cortana, demonstrates the willingness of users to employ conversational interactions BIBREF0. As a result, conversational information seeking (CIS) has been recognized as a major emerging research area in the Third Strategic Workshop on Information Retrieval (SWIRL 2018) BIBREF1.",
"Research progress in CIS relies on the availability of resources to the community. There have been recent efforts on providing data for various CIS tasks, such as the TREC 2019 Conversational Assistance Track (CAsT), MISC BIBREF2, Qulac BIBREF3, CoQA BIBREF4, QuAC BIBREF5, SCS BIBREF6, and CCPE-M BIBREF7. In addition, BIBREF8 have implemented a demonstration for conversational movie recommendation based on Google's DialogFlow. Despite all of these resources, the community still feels the lack of a suitable platform for developing CIS systems. We believe that providing such platform will speed up the progress in conversational information seeking research. Therefore, we developed a general framework for supporting CIS research. The framework is called Macaw. This paper describes the high-level architecture of Macaw, the supported functionality, and our future vision. Researchers working on various CIS tasks should be able to take advantage of Macaw in their projects.",
"Macaw is designed based on a modular architecture to support different information seeking tasks, including conversational search, conversational question answering, conversational recommendation, and conversational natural language interface to structured and semi-structured data. Each interaction in Macaw (from both user and system) is a Message object, thus a conversation is a list of Messages. Macaw consists of multiple actions, each action is a module that can satisfy the information needs of users for some requests. For example, search and question answering can be two actions in Macaw. Even multiple search algorithms can be also seen as multiple actions. Each action can produce multiple outputs (e.g., multiple retrieved documents). For every user interaction, Macaw runs all actions in parallel. The actions' outputs produced within a predefined time interval (i.e., an interaction timeout constant) are then post-processed. Macaw can choose one or combine multiple of these outputs and prepare an output Message object as the user's response.",
"The modular design of Macaw makes it relatively easy to configure a different user interface or add a new one. The current implementation of Macaw supports a command line interface as well as mobile, desktop, and web apps. In more detail, Macaw's interface can be a Telegram bot, which supports a wide range of devices and operating systems (see FIGREF4). This allows Macaw to support multi-modal interactions, such as text, speech, image, click, etc. A number of APIs for automatic speech recognition and generation have been employed to support speech interactions. Note that the Macaw's architecture and implementation allows mixed-initiative interactions.",
"The research community can benefit from Macaw for the following purposes:",
"[leftmargin=*]",
"Developing algorithms, tools, and techniques for CIS.",
"Studying user interactions with CIS systems.",
"Performing CIS studies based on an intermediary person and wizard of oz.",
"Preparing quick demonstration for a developed CIS model."
],
[
"Macaw has a modular design, with the goal of making it easy to configure and add new modules such as a different user interface or different retrieval module. The overall setup also follows a Model-View-Controller (MVC) like architecture. The design decisions have been made to smooth the Macaw's adoptions and extensions. Macaw is implemented in Python, thus machine learning models implemented using PyTorch, Scikit-learn, or TensorFlow can be easily integrated into Macaw. The high-level overview of Macaw is depicted in FIGREF8. The user interacts with the interface and the interface produces a Message object from the current interaction of user. The interaction can be in multi-modal form, such as text, speech, image, and click. Macaw stores all interactions in an “Interaction Database”. For every interaction, Macaw looks for most recent user-system interactions (including the system's responses) to create a list of Messages, called the conversation list. It is then dispatched to multiple information seeking (and related) actions. The actions run in parallel, and each should respond within a pre-defined time interval. The output selection component selects from (or potentially combines) the outputs generated by different actions and creates a Message object as the system's response. This message is logged into the interaction database and is sent to the interface to be presented to the user. Again, the response message can be multi-modal and include text, speech, link, list of options, etc.",
"Macaw also supports Wizard of Oz studies or intermediary-based information seeking studies. The architecture of Macaw for such setup is presented in FIGREF16. As shown in the figure, the seeker interacts with a real conversational interface that supports multi-modal and mixed-initiative interactions in multiple devices. The intermediary (or the wizard) receives the seeker's message and performs different information seeking actions with Macaw. All seeker-intermediary and intermediary-system interactions will be logged for further analysis. This setup can simulate an ideal CIS system and thus is useful for collecting high-quality data from real users for CIS research."
],
[
"The overview of retrieval and question answering actions in Macaw is shown in FIGREF17. These actions consist of the following components:",
"[leftmargin=*]",
"Co-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the co-references from the last request of user to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component.",
"Query Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing.",
"Retrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface BIBREF9, BIBREF10. We also provide the support for web search using the Bing Web Search API. Macaw also allows multi-stage document re-ranking.",
"Result Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step ran on the retrieved result list. In case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model BIBREF11 for question answering.",
"These components are implemented in a generic form, so researchers can easily replace them with their own favorite algorithms."
],
[
"We have implemented the following interfaces for Macaw:",
"[leftmargin=*]",
"File IO: This interface is designed for experimental purposes, such as evaluating the performance of a conversational search technique on a dataset with multiple queries. This is not an interactive interface.",
"Standard IO: This interactive command line interface is designed for development purposes to interact with the system, see the logs, and debug or improve the system.",
"Telegram: This interactive interface is designed for interaction with real users (see FIGREF4). Telegram is a popular instant messaging service whose client-side code is open-source. We have implemented a Telegram bot that can be used with different devices (personal computers, tablets, and mobile phones) and different operating systems (Android, iOS, Linux, Mac OS, and Windows). This interface allows multi-modal interactions (text, speech, click, image). It can be also used for speech-only interactions. For speech recognition and generation, Macaw relies on online APIs, e.g., the services provided by Google Cloud and Microsoft Azure. In addition, there exist multiple popular groups and channels in Telegram, which allows further integration of social networks with conversational systems. For example, see the Naseri and Zamani's study on news popularity in Telegram BIBREF12.",
"Similar to the other modules, one can easily extend Macaw using other appropriate user interfaces."
],
[
"The current implementation of Macaw lacks the following actions. We intend to incrementally improve Macaw by supporting more actions and even more advanced techniques for the developed actions.",
"[leftmargin=*]",
"Clarification and Preference Elicitation: Asking clarifying questions has been recently recognized as a necessary component in a conversational system BIBREF3, BIBREF7. The authors are not aware of a published solution for generating clarifying questions using public resources. Therefore, Macaw does not currently support clarification.",
"Explanation: Despite its importance, result list explanation is also a relatively less explored topic. We intend to extend Macaw with result list explanation as soon as we find a stable and mature solution.",
"Recommendation: In our first release, we focus on conversational search and question answering tasks. We intend to provide support for conversational recommendation, e.g., BIBREF13, BIBREF14, BIBREF15, and joint search and recommendation, e.g., BIBREF16, BIBREF17, in the future.",
"Natural Language Interface: Macaw can potentially support access to structured data, such as knowledge graph. We would like to ease conversational natural language interface to structured and semi-structured data in our future releases."
],
[
"Macaw is distributed under the MIT License. We welcome contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. This project has adopted the Microsoft Open Source Code of Conduct.",
"When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA."
],
[
"This paper described Macaw, an open-source platform for conversational information seeking research. Macaw supports multi-turn, multi-modal, and mixed-initiative interactions. It was designed based on a modular architecture that allows further improvements and extensions. Researchers can benefit from Macaw for developing algorithms and techniques for conversational information seeking research, for user studies with different interfaces, for data collection from real users, and for preparing a demonstration of a CIS model."
],
[
"The authors wish to thank Ahmed Hassan Awadallah, Krisztian Balog, and Arjen P. de Vries for their invaluable feedback."
]
],
"section_name": [
"Introduction",
"Macaw Architecture",
"Retrieval and Question Answering in Macaw",
"User Interfaces",
"Limitations and Future Work",
"Contribution",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"29ef81522bbbf4facade168a4168810a999af0c6",
"aab5343f6b18339bce0eff4fd37fa19c43fdaf5b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"dc3777893ff991d532c9322acc44c27ba08f6665",
"fc476553658428281b1404241a3ac37b38219284"
],
"answer": [
{
"evidence": [
"Co-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the co-references from the last request of user to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component.",
"Query Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing.",
"Retrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface BIBREF9, BIBREF10. We also provide the support for web search using the Bing Web Search API. Macaw also allows multi-stage document re-ranking.",
"Result Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step ran on the retrieved result list. In case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model BIBREF11 for question answering."
],
"extractive_spans": [
"Co-Reference Resolution",
"Query Generation",
"Retrieval Model",
"Result Generation"
],
"free_form_answer": "",
"highlighted_evidence": [
"Co-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the co-references from the last request of user to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component.\n\nQuery Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing.\n\nRetrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface BIBREF9, BIBREF10. We also provide the support for web search using the Bing Web Search API. Macaw also allows multi-stage document re-ranking.\n\nResult Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step ran on the retrieved result list. In case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model BIBREF11 for question answering."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Macaw is designed based on a modular architecture to support different information seeking tasks, including conversational search, conversational question answering, conversational recommendation, and conversational natural language interface to structured and semi-structured data. Each interaction in Macaw (from both user and system) is a Message object, thus a conversation is a list of Messages. Macaw consists of multiple actions, each action is a module that can satisfy the information needs of users for some requests. For example, search and question answering can be two actions in Macaw. Even multiple search algorithms can be also seen as multiple actions. Each action can produce multiple outputs (e.g., multiple retrieved documents). For every user interaction, Macaw runs all actions in parallel. The actions' outputs produced within a predefined time interval (i.e., an interaction timeout constant) are then post-processed. Macaw can choose one or combine multiple of these outputs and prepare an output Message object as the user's response."
],
"extractive_spans": [
"conversational search",
"conversational question answering",
"conversational recommendation",
"conversational natural language interface to structured and semi-structured data"
],
"free_form_answer": "",
"highlighted_evidence": [
"Macaw is designed based on a modular architecture to support different information seeking tasks, including conversational search, conversational question answering, conversational recommendation, and conversational natural language interface to structured and semi-structured data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"55d2294e269c04afd444c4993572c0cd2c68d369",
"f600861bb2547bd97f257fdd9a0b130ac28e529c"
],
"answer": [
{
"evidence": [
"Macaw also supports Wizard of Oz studies or intermediary-based information seeking studies. The architecture of Macaw for such setup is presented in FIGREF16. As shown in the figure, the seeker interacts with a real conversational interface that supports multi-modal and mixed-initiative interactions in multiple devices. The intermediary (or the wizard) receives the seeker's message and performs different information seeking actions with Macaw. All seeker-intermediary and intermediary-system interactions will be logged for further analysis. This setup can simulate an ideal CIS system and thus is useful for collecting high-quality data from real users for CIS research."
],
"extractive_spans": [
"seeker interacts with a real conversational interface",
"intermediary (or the wizard) receives the seeker's message and performs different information seeking actions"
],
"free_form_answer": "",
"highlighted_evidence": [
"Macaw also supports Wizard of Oz studies or intermediary-based information seeking studies. The architecture of Macaw for such setup is presented in FIGREF16. As shown in the figure, the seeker interacts with a real conversational interface that supports multi-modal and mixed-initiative interactions in multiple devices. The intermediary (or the wizard) receives the seeker's message and performs different information seeking actions with Macaw. All seeker-intermediary and intermediary-system interactions will be logged for further analysis."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Macaw also supports Wizard of Oz studies or intermediary-based information seeking studies. The architecture of Macaw for such setup is presented in FIGREF16. As shown in the figure, the seeker interacts with a real conversational interface that supports multi-modal and mixed-initiative interactions in multiple devices. The intermediary (or the wizard) receives the seeker's message and performs different information seeking actions with Macaw. All seeker-intermediary and intermediary-system interactions will be logged for further analysis. This setup can simulate an ideal CIS system and thus is useful for collecting high-quality data from real users for CIS research."
],
"extractive_spans": [],
"free_form_answer": "a setup where the seeker interacts with a real conversational interface and the wizard, an intermediary, performs actions related to the seeker's message",
"highlighted_evidence": [
"Macaw also supports Wizard of Oz studies or intermediary-based information seeking studies. The architecture of Macaw for such setup is presented in FIGREF16. As shown in the figure, the seeker interacts with a real conversational interface that supports multi-modal and mixed-initiative interactions in multiple devices. The intermediary (or the wizard) receives the seeker's message and performs different information seeking actions with Macaw. All seeker-intermediary and intermediary-system interactions will be logged for further analysis. This setup can simulate an ideal CIS system and thus is useful for collecting high-quality data from real users for CIS research."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"ddf1457dde6a63a6585ec4f52f4d6ea1984e52eb",
"e87df77ac666b446c873ae0e1706b103d36f7d42"
],
"answer": [
{
"evidence": [
"We have implemented the following interfaces for Macaw:",
"[leftmargin=*]",
"File IO: This interface is designed for experimental purposes, such as evaluating the performance of a conversational search technique on a dataset with multiple queries. This is not an interactive interface.",
"Standard IO: This interactive command line interface is designed for development purposes to interact with the system, see the logs, and debug or improve the system.",
"Telegram: This interactive interface is designed for interaction with real users (see FIGREF4). Telegram is a popular instant messaging service whose client-side code is open-source. We have implemented a Telegram bot that can be used with different devices (personal computers, tablets, and mobile phones) and different operating systems (Android, iOS, Linux, Mac OS, and Windows). This interface allows multi-modal interactions (text, speech, click, image). It can be also used for speech-only interactions. For speech recognition and generation, Macaw relies on online APIs, e.g., the services provided by Google Cloud and Microsoft Azure. In addition, there exist multiple popular groups and channels in Telegram, which allows further integration of social networks with conversational systems. For example, see the Naseri and Zamani's study on news popularity in Telegram BIBREF12."
],
"extractive_spans": [
"File IO",
"Standard IO",
"Telegram"
],
"free_form_answer": "",
"highlighted_evidence": [
"We have implemented the following interfaces for Macaw:\n\n[leftmargin=*]\n\nFile IO: This interface is designed for experimental purposes, such as evaluating the performance of a conversational search technique on a dataset with multiple queries. This is not an interactive interface.\n\nStandard IO: This interactive command line interface is designed for development purposes to interact with the system, see the logs, and debug or improve the system.\n\nTelegram: This interactive interface is designed for interaction with real users (see FIGREF4). Telegram is a popular instant messaging service whose client-side code is open-source. We have implemented a Telegram bot that can be used with different devices (personal computers, tablets, and mobile phones) and different operating systems (Android, iOS, Linux, Mac OS, and Windows). This interface allows multi-modal interactions (text, speech, click, image). It can be also used for speech-only interactions. For speech recognition and generation, Macaw relies on online APIs, e.g., the services provided by Google Cloud and Microsoft Azure. In addition, there exist multiple popular groups and channels in Telegram, which allows further integration of social networks with conversational systems. For example, see the Naseri and Zamani's study on news popularity in Telegram BIBREF12."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The modular design of Macaw makes it relatively easy to configure a different user interface or add a new one. The current implementation of Macaw supports a command line interface as well as mobile, desktop, and web apps. In more detail, Macaw's interface can be a Telegram bot, which supports a wide range of devices and operating systems (see FIGREF4). This allows Macaw to support multi-modal interactions, such as text, speech, image, click, etc. A number of APIs for automatic speech recognition and generation have been employed to support speech interactions. Note that the Macaw's architecture and implementation allows mixed-initiative interactions."
],
"extractive_spans": [
"The current implementation of Macaw supports a command line interface as well as mobile, desktop, and web apps."
],
"free_form_answer": "",
"highlighted_evidence": [
"The current implementation of Macaw supports a command line interface as well as mobile, desktop, and web apps."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"40c72ee3c884a0456fc48639937ed20e6cce5de7"
],
"answer": [
{
"evidence": [
"The modular design of Macaw makes it relatively easy to configure a different user interface or add a new one. The current implementation of Macaw supports a command line interface as well as mobile, desktop, and web apps. In more detail, Macaw's interface can be a Telegram bot, which supports a wide range of devices and operating systems (see FIGREF4). This allows Macaw to support multi-modal interactions, such as text, speech, image, click, etc. A number of APIs for automatic speech recognition and generation have been employed to support speech interactions. Note that the Macaw's architecture and implementation allows mixed-initiative interactions."
],
"extractive_spans": [
"text, speech, image, click, etc"
],
"free_form_answer": "",
"highlighted_evidence": [
"This allows Macaw to support multi-modal interactions, such as text, speech, image, click, etc."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"121c7f194cffbfe8ea3bf64e8197b9ccdc114734",
"e0c9834bf81d5a27133d8331ad87c42b03b3be74"
],
"answer": [
{
"evidence": [
"The overview of retrieval and question answering actions in Macaw is shown in FIGREF17. These actions consist of the following components:",
"[leftmargin=*]",
"Co-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the co-references from the last request of user to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component.",
"Query Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing.",
"Retrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface BIBREF9, BIBREF10. We also provide the support for web search using the Bing Web Search API. Macaw also allows multi-stage document re-ranking.",
"Result Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step ran on the retrieved result list. In case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model BIBREF11 for question answering."
],
"extractive_spans": [
"Co-Reference Resolution",
"Query Generation",
"Retrieval Model",
"Result Generation"
],
"free_form_answer": "",
"highlighted_evidence": [
"These actions consist of the following components:\n\n[leftmargin=*]\n\nCo-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval.",
"Query Generation: This component generates a query based on the past user-system interactions.",
"Retrieval Model: This is the core ranking component that retrieves documents or passages from a large collection.",
"Result Generation: The retrieved documents can be too long to be presented using some interfaces."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Macaw has a modular design, with the goal of making it easy to configure and add new modules such as a different user interface or different retrieval module. The overall setup also follows a Model-View-Controller (MVC) like architecture. The design decisions have been made to smooth the Macaw's adoptions and extensions. Macaw is implemented in Python, thus machine learning models implemented using PyTorch, Scikit-learn, or TensorFlow can be easily integrated into Macaw. The high-level overview of Macaw is depicted in FIGREF8. The user interacts with the interface and the interface produces a Message object from the current interaction of user. The interaction can be in multi-modal form, such as text, speech, image, and click. Macaw stores all interactions in an “Interaction Database”. For every interaction, Macaw looks for most recent user-system interactions (including the system's responses) to create a list of Messages, called the conversation list. It is then dispatched to multiple information seeking (and related) actions. The actions run in parallel, and each should respond within a pre-defined time interval. The output selection component selects from (or potentially combines) the outputs generated by different actions and creates a Message object as the system's response. This message is logged into the interaction database and is sent to the interface to be presented to the user. Again, the response message can be multi-modal and include text, speech, link, list of options, etc.",
"The overview of retrieval and question answering actions in Macaw is shown in FIGREF17. These actions consist of the following components:",
"[leftmargin=*]",
"Co-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the co-references from the last request of user to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component.",
"Query Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing.",
"Retrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface BIBREF9, BIBREF10. We also provide the support for web search using the Bing Web Search API. Macaw also allows multi-stage document re-ranking.",
"Result Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step ran on the retrieved result list. In case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model BIBREF11 for question answering."
],
"extractive_spans": [
"Co-Reference Resolution",
"Query Generation",
"Retrieval Model",
"Result Generation"
],
"free_form_answer": "",
"highlighted_evidence": [
"Macaw has a modular design, with the goal of making it easy to configure and add new modules such as a different user interface or different retrieval module. The overall setup also follows a Model-View-Controller (MVC) like architecture.",
"These actions consist of the following components:\n\n[leftmargin=*]\n\nCo-Reference Resolution: To support multi-turn interactions, it is sometimes necessary to use co-reference resolution techniques for effective retrieval. In Macaw, we identify all the co-references from the last request of user to the conversation history. The same co-reference resolution outputs can be used for different query generation components. This can be a generic or action-specific component.\n\nQuery Generation: This component generates a query based on the past user-system interactions. The query generation component may take advantage of co-reference resolution for query expansion or re-writing.\n\nRetrieval Model: This is the core ranking component that retrieves documents or passages from a large collection. Macaw can retrieve documents from an arbitrary document collection using the Indri python interface BIBREF9, BIBREF10. We also provide the support for web search using the Bing Web Search API. Macaw also allows multi-stage document re-ranking.\n\nResult Generation: The retrieved documents can be too long to be presented using some interfaces. Result generation is basically a post-processing step ran on the retrieved result list. In case of question answering, it can employ answer selection or generation techniques, such as machine reading comprehension models. For example, Macaw features the DrQA model BIBREF11 for question answering."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Does the paper provide any case studies to illustrate how one can use Macaw for CIS research?",
"What functionality does Macaw provide?",
"What is a wizard of oz setup?",
"What interface does Macaw currently have?",
"What modalities are supported by Macaw?",
"What are the different modules in Macaw?"
],
"question_id": [
"58ef2442450c392bfc55c4dc35f216542f5f2dbb",
"78a5546e87d4d88e3d9638a0a8cd0b7debf1f09d",
"375b281e7441547ba284068326dd834216e55c07",
"05c49b9f84772e6df41f530d86c1f7a1da6aa489",
"6ecb69360449bb9915ac73c0a816c8ac479cbbfc",
"68df324e5fa697baed25c761d0be4c528f7f5cf7"
],
"question_writer": [
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"ecca0cede84b7af8a918852311d36346b07f0668",
"f7c76ad7ff9c8b54e8c397850358fa59258c6672",
"f7c76ad7ff9c8b54e8c397850358fa59258c6672",
"f7c76ad7ff9c8b54e8c397850358fa59258c6672"
],
"search_query": [
"information seeking",
"information seeking",
"information seeking",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Example screenshots of the Macaw interface on mobile devices using Telegram bots. Macaw supports multimodal and multi-turn interactions.",
"Figure 2: The high-level architecture of Macaw for developing conversation information seeking systems.",
"Figure 3: The high-level architecture of Macaw for user studies. In this architecture, user interacts with a human intermediary who is an expert of the system and can interact with the system to address the user’s information need.",
"Figure 4: The overview of retrieval and question answering in Macaw."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png"
]
} | [
"What is a wizard of oz setup?"
] | [
[
"1912.08904-Macaw Architecture-1"
]
] | [
"a setup where the seeker interacts with a real conversational interface and the wizard, an intermediary, performs actions related to the seeker's message"
] | 121 |
1703.10344 | Automated News Suggestions for Populating Wikipedia Entity Pages | Wikipedia entity pages are a valuable source of information for direct consumption and for knowledge-base construction, update and maintenance. Facts in these entity pages are typically supported by references. Recent studies show that as much as 20\% of the references are from online news sources. However, many entity pages are incomplete even if relevant information is already available in existing news articles. Even for the already present references, there is often a delay between the news article publication time and the reference time. In this work, we therefore look at Wikipedia through the lens of news and propose a novel news-article suggestion task to improve news coverage in Wikipedia, and reduce the lag of newsworthy references. Our work finds direct application, as a precursor, to Wikipedia page generation and knowledge-base acceleration tasks that rely on relevant and high quality input sources. We propose a two-stage supervised approach for suggesting news articles to entity pages for a given state of Wikipedia. First, we suggest news articles to Wikipedia entities (article-entity placement) relying on a rich set of features which take into account the \emph{salience} and \emph{relative authority} of entities, and the \emph{novelty} of news articles to entity pages. Second, we determine the exact section in the entity page for the input article (article-section placement) guided by class-based section templates. We perform an extensive evaluation of our approach based on ground-truth data that is extracted from external references in Wikipedia. We achieve a high precision value of up to 93\% in the \emph{article-entity} suggestion stage and upto 84\% for the \emph{article-section placement}. Finally, we compare our approach against competitive baselines and show significant improvements. | {
"paragraphs": [
[
"Wikipedia is the largest source of open and collaboratively curated knowledge in the world. Introduced in 2001, it has evolved into a reference work with around 5m pages for the English Wikipedia alone. In addition, entities and event pages are updated quickly via collaborative editing and all edits are encouraged to include source citations, creating a knowledge base which aims at being both timely as well as authoritative. As a result, it has become the preferred source of information consumption about entities and events. Moreso, this knowledge is harvested and utilized in building knowledge bases like YAGO BIBREF0 and DBpedia BIBREF1 , and used in applications like text categorization BIBREF2 , entity disambiguation BIBREF3 , entity ranking BIBREF4 and distant supervision BIBREF5 , BIBREF6 .",
"However, not all Wikipedia pages referring to entities (entity pages) are comprehensive: relevant information can either be missing or added with a delay. Consider the city of New Orleans and the state of Odisha which were severely affected by cyclones Hurricane Katrina and Odisha Cyclone, respectively. While Katrina finds extensive mention in the entity page for New Orleans, Odisha Cyclone which has 5 times more human casualties (cf. Figure FIGREF2 ) is not mentioned in the page for Odisha. Arguably Katrina and New Orleans are more popular entities, but Odisha Cyclone was also reported extensively in national and international news outlets. This highlights the lack of important facts in trunk and long-tail entity pages, even in the presence of relevant sources. In addition, previous studies have shown that there is an inherent delay or lag when facts are added to entity pages BIBREF7 .",
"To remedy these problems, it is important to identify information sources that contain novel and salient facts to a given entity page. However, not all information sources are equal. The online presence of major news outlets is an authoritative source due to active editorial control and their articles are also a timely container of facts. In addition, their use is in line with current Wikipedia editing practice, as is shown in BIBREF7 that almost 20% of current citations in all entity pages are news articles. We therefore propose news suggestion as a novel task that enhances entity pages and reduces delay while keeping its pages authoritative.",
"Existing efforts to populate Wikipedia BIBREF8 start from an entity page and then generate candidate documents about this entity using an external search engine (and then post-process them). However, such an approach lacks in (a) reproducibility since rankings vary with time with obvious bias to recent news (b) maintainability since document acquisition for each entity has to be periodically performed. To this effect, our news suggestion considers a news article as input, and determines if it is valuable for Wikipedia. Specifically, given an input news article INLINEFORM0 and a state of Wikipedia, the news suggestion problem identifies the entities mentioned in INLINEFORM1 whose entity pages can improve upon suggesting INLINEFORM2 . Most of the works on knowledge base acceleration BIBREF9 , BIBREF10 , BIBREF11 , or Wikipedia page generation BIBREF8 rely on high quality input sources which are then utilized to extract textual facts for Wikipedia page population. In this work, we do not suggest snippets or paraphrases but rather entire articles which have a high potential importance for entity pages. These suggested news articles could be consequently used for extraction, summarization or population either manually or automatically – all of which rely on high quality and relevant input sources.",
"We identify four properties of good news recommendations: salience, relative authority, novelty and placement. First, we need to identify the most salient entities in a news article. This is done to avoid pollution of entity pages with only marginally related news. Second, we need to determine whether the news is important to the entity as only the most relevant news should be added to a precise reference work. To do this, we compute the relative authority of all entities in the news article: we call an entity more authoritative than another if it is more popular or noteworthy in the real world. Entities with very high authority have many news items associated with them and only the most relevant of these should be included in Wikipedia whereas for entities of lower authority the threshold for inclusion of a news article will be lower. Third, a good recommendation should be able to identify novel news by minimizing redundancy coming from multiple news articles. Finally, addition of facts is facilitated if the recommendations are fine-grained, i.e., recommendations are made on the section level rather than the page level (placement).",
"Approach and Contributions. We propose a two-stage news suggestion approach to entity pages. In the first stage, we determine whether a news article should be suggested for an entity, based on the entity's salience in the news article, its relative authority and the novelty of the article to the entity page. The second stage takes into account the class of the entity for which the news is suggested and constructs section templates from entities of the same class. The generation of such templates has the advantage of suggesting and expanding entity pages that do not have a complete section structure in Wikipedia, explicitly addressing long-tail and trunk entities. Afterwards, based on the constructed template our method determines the best fit for the news article with one of the sections.",
"We evaluate the proposed approach on a news corpus consisting of 351,982 articles crawled from the news external references in Wikipedia from 73,734 entity pages. Given the Wikipedia snapshot at a given year (in our case [2009-2014]), we suggest news articles that might be cited in the coming years. The existing news references in the entity pages along with their reference date act as our ground-truth to evaluate our approach. In summary, we make the following contributions."
],
[
"As we suggest a new problem there is no current work addressing exactly the same task. However, our task has similarities to Wikipedia page generation and knowledge base acceleration. In addition, we take inspiration from Natural Language Processing (NLP) methods for salience detection.",
"Wikipedia Page Generation is the problem of populating Wikipedia pages with content coming from external sources. Sauper and Barzilay BIBREF8 propose an approach for automatically generating whole entity pages for specific entity classes. The approach is trained on already-populated entity pages of a given class (e.g. `Diseases') by learning templates about the entity page structure (e.g. diseases have a treatment section). For a new entity page, first, they extract documents via Web search using the entity title and the section title as a query, for example `Lung Cancer'+`Treatment'. As already discussed in the introduction, this has problems with reproducibility and maintainability. However, their main focus is on identifying the best paragraphs extracted from the collected documents. They rank the paragraphs via an optimized supervised perceptron model for finding the most representative paragraph that is the least similar to paragraphs in other sections. This paragraph is then included in the newly generated entity page. Taneva and Weikum BIBREF12 propose an approach that constructs short summaries for the long tail. The summaries are called `gems' and the size of a `gem' can be user defined. They focus on generating summaries that are novel and diverse. However, they do not consider any structure of entities, which is present in Wikipedia.",
"In contrast to BIBREF8 and BIBREF12 , we actually focus on suggesting entire documents to Wikipedia entity pages. These are authoritative documents (news), which are highly relevant for the entity, novel for the entity and in which the entity is salient. Whereas relevance in Sauper and Barzilay is implicitly computed by web page ranking we solve that problem by looking at relative authority and salience of an entity, using the news article and entity page only. As Sauper and Barzilay concentrate on empty entity pages, the problem of novelty of their content is not an issue in their work whereas it is in our case which focuses more on updating entities. Updating entities will be more and more important the bigger an existing reference work is. Both the approaches in BIBREF8 and BIBREF12 (finding paragraphs and summarization) could then be used to process the documents we suggest further. Our concentration on news is also novel.",
"Knowledge Base Acceleration. In this task, given specific information extraction templates, a given corpus is analyzed in order to find worthwhile mentions of an entity or snippets that match the templates. Balog BIBREF9 , BIBREF10 recommend news citations for an entity. Prior to that, the news articles are classified for their appropriateness for an entity, where as features for the classification task they use entity, document, entity-document and temporal features. The best performing features are those that measure similarity between an entity and the news document. West et al. BIBREF13 consider the problem of knowledge base completion, through question answering and complete missing facts in Freebase based on templates, i.e. Frank_Zappa bornIn Baltymore, Maryland.",
"In contrast, we do not extract facts for pre-defined templates but rather suggest news articles based on their relevance to an entity. In cases of long-tail entities, we can suggest to add a novel section through our abstraction and generation of section templates at entity class level.",
"Entity Salience. Determining which entities are prominent or salient in a given text has a long history in NLP, sparked by the linguistic theory of Centering BIBREF14 . Salience has been used in pronoun and co-reference resolution BIBREF15 , or to predict which entities will be included in an abstract of an article BIBREF11 . Frequent features to measure salience include the frequency of an entity in a document, positioning of an entity, grammatical function or internal entity structure (POS tags, head nouns etc.). These approaches are not currently aimed at knowledge base generation or Wikipedia coverage extension but we postulate that an entity's salience in a news article is a prerequisite to the news article being relevant enough to be included in an entity page. We therefore use the salience features in BIBREF11 as part of our model. However, these features are document-internal — we will show that they are not sufficient to predict news inclusion into an entity page and add features of entity authority, news authority and novelty that measure the relations between several entities, between entity and news article as well as between several competing news articles."
],
[
"We are interested in named entities mentioned in documents. An entity INLINEFORM0 can be identified by a canonical name, and can be mentioned differently in text via different surface forms. We canonicalize these mentions to entity pages in Wikipedia, a method typically known as entity linking. We denote the set of canonicalized entities extracted and linked from a news article INLINEFORM1 as INLINEFORM2 . For example, in Figure FIGREF7 , entities are canonicalized into Wikipedia entity pages (e.g. Odisha is canonicalized to the corresponding article). For a collection of news articles INLINEFORM3 , we further denote the resulting set of entities by INLINEFORM4 .",
"Information in an entity page is organized into sections and evolves with time as more content is added. We refer to the state of Wikipedia at a time INLINEFORM0 as INLINEFORM1 and the set of sections for an entity page INLINEFORM2 as its entity profile INLINEFORM3 . Unlike news articles, text in Wikipedia could be explicitly linked to entity pages through anchors. The set of entities explicitly referred in text from section INLINEFORM4 is defined as INLINEFORM5 . Furthermore, Wikipedia induces a category structure over its entities, which is exploited by knowledge bases like YAGO (e.g. Barack_Obama isA Person). Consequently, each entity page belongs to one or more entity categories or classes INLINEFORM6 . Now we can define our news suggestion problem below:",
"Definition 1 (News Suggestion Problem) Given a set of news articles INLINEFORM0 and set of Wikipedia entity pages INLINEFORM1 (from INLINEFORM2 ) we intend to suggest a news article INLINEFORM3 published at time INLINEFORM4 to entity page INLINEFORM5 and additionally to the most relevant section for the entity page INLINEFORM6 ."
],
[
"We approach the news suggestion problem by decomposing it into two tasks:",
"AEP: Article–Entity placement",
"ASP: Article–Section placement",
"In this first step, for a given entity-news pair INLINEFORM0 , we determine whether the given news article INLINEFORM1 should be suggested (we will refer to this as `relevant') to entity INLINEFORM2 . To generate such INLINEFORM3 pairs, we perform the entity linking process, INLINEFORM4 , for INLINEFORM5 .",
"The article–entity placement task (described in detail in Section SECREF16 ) for a pair INLINEFORM0 outputs a binary label (either `non-relevant' or `relevant') and is formalized in Equation EQREF14 . DISPLAYFORM0 ",
"In the second step, we take into account all `relevant' pairs INLINEFORM0 and find the correct section for article INLINEFORM1 in entity INLINEFORM2 , respectively its profile INLINEFORM3 (see Section SECREF30 ). The article–section placement task, determines the correct section for the triple INLINEFORM4 , and is formalized in Equation EQREF15 . DISPLAYFORM0 ",
"In the subsequent sections we describe in details how we approach the two tasks for suggesting news articles to entity pages."
],
[
"In this section, we provide an overview of the news suggestion approach to Wikipedia entity pages (see Figure FIGREF7 ). The approach is split into two tasks: (i) article-entity (AEP) and (ii) article-section (ASP) placement. For a Wikipedia snapshot INLINEFORM0 and a news corpus INLINEFORM1 , we first determine which news articles should be suggested to an entity INLINEFORM2 . We will denote our approach for AEP by INLINEFORM3 . Finally, we determine the most appropriate section for the ASP task and we denote our approach with INLINEFORM4 .",
"In the following, we describe the process of learning the functions INLINEFORM0 and INLINEFORM1 . We introduce features for the learning process, which encode information regarding the entity salience, relative authority and novelty in the case of AEP task. For the ASP task, we measure the overall fit of an article to the entity sections, with the entity being an input from AEP task. Additionally, considering that the entity profiles INLINEFORM2 are incomplete, in the case of a missing section we suggest and expand the entity profiles based on section templates generated from entities of the same class INLINEFORM3 (see Section UID34 )."
],
[
"In this step we learn the function INLINEFORM0 to correctly determine whether INLINEFORM1 should be suggested for INLINEFORM2 , basically a binary classification model (0=`non-relevant' and 1=`relevant'). Note that we are mainly interested in finding the relevant pairs in this task. For every news article, the number of disambiguated entities is around 30 (but INLINEFORM3 is suggested for only two of them on average). Therefore, the distribution of `non-relevant' and `relevant' pairs is skewed towards the earlier, and by simply choosing the `non-relevant' label we can achieve a high accuracy for INLINEFORM4 . Finding the relevant pairs is therefore a considerable challenge.",
"An article INLINEFORM0 is suggested to INLINEFORM1 by our function INLINEFORM2 if it fulfills the following properties. The entity INLINEFORM3 is salient in INLINEFORM4 (a central concept), therefore ensuring that INLINEFORM5 is about INLINEFORM6 and that INLINEFORM7 is important for INLINEFORM8 . Next, given the fact there might be many articles in which INLINEFORM9 is salient, we also look at the reverse property, namely whether INLINEFORM10 is important for INLINEFORM11 . We do this by comparing the authority of INLINEFORM12 (which is a measure of popularity of an entity, such as its frequency of mention in a whole corpus) with the authority of its co-occurring entities in INLINEFORM13 , leading to a feature we call relative authority. The intuition is that for an entity that has overall lower authority than its co-occurring entities, a news article is more easily of importance. Finally, if the article we are about to suggest is already covered in the entity profile INLINEFORM14 , we do not wish to suggest redundant information, hence the novelty. Therefore, the learning objective of INLINEFORM15 should fulfill the following properties. Table TABREF21 shows a summary of the computed features for INLINEFORM16 .",
"Salience: entity INLINEFORM0 should be a salient entity in news article INLINEFORM1 ",
"Relative Authority: the set of entities INLINEFORM0 with which INLINEFORM1 co-occurs should have higher authority than INLINEFORM2 , making INLINEFORM3 important for INLINEFORM4 ",
"Novelty: news article INLINEFORM0 should provide novel information for entity INLINEFORM1 taking into account its profile INLINEFORM2 ",
"Baseline Features. As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. Table 2 in BIBREF11 gives details.",
"Relative Entity Frequency. Although frequency of mention and positional features play some role in baseline features, their interaction is not modeled by a single feature nor do the positional features encode more than sentence position. We therefore suggest a novel feature called relative entity frequency, INLINEFORM0 , that has three properties.: (i) It rewards entities for occurring throughout the text instead of only in some parts of the text, measured by the number of paragraphs it occurs in (ii) it rewards entities that occur more frequently in the opening paragraphs of an article as we model INLINEFORM1 as an exponential decay function. The decay corresponds to the positional index of the news paragraph. This is inspired by the news-specific discourse structure that tends to give short summaries of the most important facts and entities in the opening paragraphs. (iii) it compares entity frequency to the frequency of its co-occurring mentions as the weight of an entity appearing in a specific paragraph, normalized by the sum of the frequencies of other entities in INLINEFORM2 . DISPLAYFORM0 ",
"where, INLINEFORM0 represents a news paragraph from INLINEFORM1 , and with INLINEFORM2 we indicate the set of all paragraphs in INLINEFORM3 . The frequency of INLINEFORM4 in a paragraph INLINEFORM5 is denoted by INLINEFORM6 . With INLINEFORM7 and INLINEFORM8 we indicate the number of paragraphs in which entity INLINEFORM9 occurs, and the total number of paragraphs, respectively.",
"Relative Authority. In this case, we consider the comparative relevance of the news article to the different entities occurring in it. As an example, let us consider the meeting of the Sudanese bishop Elias Taban with Hillary Clinton. Both entities are salient for the meeting. However, in Taban's Wikipedia page, this meeting is discussed prominently with a corresponding news reference, whereas in Hillary Clinton's Wikipedia page it is not reported at all. We believe this is not just an omission in Clinton's page but mirrors the fact that for the lesser known Taban the meeting is big news whereas for the more famous Clinton these kind of meetings are a regular occurrence, not all of which can be reported in what is supposed to be a selection of the most important events for her. Therefore, if two entities co-occur, the news is more relevant for the entity with the lower a priori authority.",
"The a priori authority of an entity (denoted by INLINEFORM0 ) can be measured in several ways. We opt for two approaches: (i) probability of entity INLINEFORM1 occurring in the corpus INLINEFORM2 , and (ii) authority assessed through centrality measures like PageRank BIBREF16 . For the second case we construct the graph INLINEFORM3 consisting of entities in INLINEFORM4 and news articles in INLINEFORM5 as vertices. The edges are established between INLINEFORM6 and entities in INLINEFORM7 , that is INLINEFORM8 , and the out-links from INLINEFORM9 , that is INLINEFORM10 (arrows present the edge direction).",
"Starting from a priori authority, we proceed to relative authority by comparing the a priori authority of co-occurring entities in INLINEFORM0 . We define the relative authority of INLINEFORM1 as the proportion of co-occurring entities INLINEFORM2 that have a higher a priori authority than INLINEFORM3 (see Equation EQREF28 . DISPLAYFORM0 ",
"As we might run the danger of not suggesting any news articles for entities with very high a priori authority (such as Clinton) due to the strict inequality constraint, we can relax the constraint such that the authority of co-occurring entities is above a certain threshold.",
"News Domain Authority. The news domain authority addresses two main aspects. Firstly, if bundled together with the relative authority feature, we can ensure that dependent on the entity authority, we suggest news from authoritative sources, hence ensuring the quality of suggested articles. The second aspect is in a news streaming scenario where multiple news domains report the same event — ideally only articles coming from authoritative sources would fulfill the conditions for the news suggestion task.",
"The news domain authority is computed based on the number of news references in Wikipedia coming from a particular news domain INLINEFORM0 . This represents a simple prior that a news article INLINEFORM1 is from domain INLINEFORM2 in corpus INLINEFORM3 . We extract the domains by taking the base URLs from the news article URLs.",
"An important feature when suggesting an article INLINEFORM0 to an entity INLINEFORM1 is the novelty of INLINEFORM2 w.r.t the already existing entity profile INLINEFORM3 . Studies BIBREF17 have shown that on comparable collections to ours (TREC GOV2) the number of duplicates can go up to INLINEFORM4 . This figure is likely higher for major events concerning highly authoritative entities on which all news media will report.",
"Given an entity INLINEFORM0 and the already added news references INLINEFORM1 up to year INLINEFORM2 , the novelty of INLINEFORM3 at year INLINEFORM4 is measured by the KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6 . We combine this measure with the entity overlap of INLINEFORM7 and INLINEFORM8 . The novelty value of INLINEFORM9 is given by the minimal divergence value. Low scores indicate low novelty for the entity profile INLINEFORM10 .",
"N(n|e) = n'Nt-1{DKL((n') || (n)) + DKL((N) || (n)).",
"DKL((n') || (n)). (1-) jaccard((n'),(n))} where INLINEFORM0 is the KL divergence of the language models ( INLINEFORM1 and INLINEFORM2 ), whereas INLINEFORM3 is the mixing weight ( INLINEFORM4 ) between the language models INLINEFORM5 and the entity overlap in INLINEFORM6 and INLINEFORM7 .",
"Here we introduce the evaluation setup and analyze the results for the article–entity (AEP) placement task. We only report the evaluation metrics for the `relevant' news-entity pairs. A detailed explanation on why we focus on the `relevant' pairs is provided in Section SECREF16 .",
"Baselines. We consider the following baselines for this task.",
"B1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 .",
"B2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .",
"Learning Models. We use Random Forests (RF) BIBREF23 . We learn the RF on all computed features in Table TABREF21 . The optimization on RF is done by splitting the feature space into multiple trees that are considered as ensemble classifiers. Consequently, for each classifier it computes the margin function as a measure of the average count of predicting the correct class in contrast to any other class. The higher the margin score the more robust the model.",
"Metrics. We compute precision P, recall R and F1 score for the relevant class. For example, precision is the number of news-entity pairs we correctly labeled as relevant compared to our ground truth divided by the number of all news-entity pairs we labeled as relevant.",
"The following results measure the effectiveness of our approach in three main aspects: (i) overall performance of INLINEFORM0 and comparison to baselines, (ii) robustness across the years, and (iii) optimal model for the AEP placement task.",
"Performance. Figure FIGREF55 shows the results for the years 2009 and 2013, where we optimized the learning objective with instances from year INLINEFORM0 and evaluate on the years INLINEFORM1 (see Section SECREF46 ). The results show the precision–recall curve. The red curve shows baseline B1 BIBREF11 , and the blue one shows the performance of INLINEFORM2 . The curve shows for varying confidence scores (high to low) the precision on labeling the pair INLINEFORM3 as `relevant'. In addition, at each confidence score we can compute the corresponding recall for the `relevant' label. For high confidence scores on labeling the news-entity pairs, the baseline B1 achieves on average a precision score of P=0.50, while INLINEFORM4 has P=0.93. We note that with the drop in the confidence score the corresponding precision and recall values drop too, and the overall F1 score for B1 is around F1=0.2, in contrast we achieve an average score of F1=0.67.",
"It is evident from Figure FIGREF55 that for the years 2009 and 2013, INLINEFORM0 significantly outperforms the baseline B1. We measure the significance through the t-test statistic and get a p-value of INLINEFORM1 . The improvement we achieve over B1 in absolute numbers, INLINEFORM2 P=+0.5 in terms of precision for the years between 2009 and 2014, and a similar improvement in terms of F1 score. The improvement for recall is INLINEFORM3 R=+0.4. The relative improvement over B1 for P and F1 is almost 1.8 times better, while for recall we are 3.5 times better. In Table TABREF58 we show the overall scores for the evaluation metrics for B1 and INLINEFORM4 . Finally, for B2 we achieve much poorer performance, with average scores of P=0.21, R=0.20 and F1=0.21.",
"Robustness. In Table TABREF58 , we show the overall performance for the years between 2009 and 2013. An interesting observation we make is that we have a very robust performance and the results are stable across the years. If we consider the experimental setup, where for year INLINEFORM0 we optimize the learning objective with only 74k training instances and evaluate on the rest of the instances, it achieves a very good performance. We predict with F1=0.68 the remaining 469k instances for the years INLINEFORM1 .",
"The results are particularly promising considering the fact that the distribution between our two classes is highly skewed. On average the number of `relevant' pairs account for only around INLINEFORM0 of all pairs. A good indicator to support such a statement is the kappa (denoted by INLINEFORM1 ) statistic. INLINEFORM2 measures agreement between the algorithm and the gold standard on both labels while correcting for chance agreement (often expected due to extreme distributions). The INLINEFORM3 scores for B1 across the years is on average INLINEFORM4 , while for INLINEFORM5 we achieve a score of INLINEFORM6 (the maximum score for INLINEFORM7 is 1).",
"In Figure FIGREF60 we show the impact of the individual feature groups that contribute to the superior performance in comparison to the baselines. Relative entity frequency from the salience feature, models the entity salience as an exponentially decaying function based on the positional index of the paragraph where the entity appears. The performance of INLINEFORM0 with relative entity frequency from the salience feature group is close to that of all the features combined. The authority and novelty features account to a further improvement in terms of precision, by adding roughly a 7%-10% increase. However, if both feature groups are considered separately, they significantly outperform the baseline B1."
],
[
"We model the ASP placement task as a successor of the AEP task. For all the `relevant' news entity pairs, the task is to determine the correct entity section. Each section in a Wikipedia entity page represents a different topic. For example, Barack Obama has the sections `Early Life', `Presidency', `Family and Personal Life' etc. However, many entity pages have an incomplete section structure. Incomplete or missing sections are due to two Wikipedia properties. First, long-tail entities miss information and sections due to their lack of popularity. Second, for all entities whether popular or not, certain sections might occur for the first time due to real world developments. As an example, the entity Germanwings did not have an `Accidents' section before this year's disaster, which was the first in the history of the airline.",
"Even if sections are missing for certain entities, similar sections usually occur in other entities of the same class (e.g. other airlines had disasters and therefore their pages have an accidents section). We exploit such homogeneity of section structure and construct templates that we use to expand entity profiles. The learning objective for INLINEFORM0 takes into account the following properties:",
"Section-templates: account for incomplete section structure for an entity profile INLINEFORM0 by constructing section templates INLINEFORM1 from an entity class INLINEFORM2 ",
"Overall fit: measures the overall fit of a news article to sections in the section templates INLINEFORM0 ",
"Given the fact that entity profiles are often incomplete, we construct section templates for every entity class. We group entities based on their class INLINEFORM0 and construct section templates INLINEFORM1 . For different entity classes, e.g. Person and Location, the section structure and the information represented in those section varies heavily. Therefore, the section templates are with respect to the individual classes in our experimental setup (see Figure FIGREF42 ). DISPLAYFORM0 ",
"Generating section templates has two main advantages. Firstly, by considering class-based profiles, we can overcome the problem of incomplete individual entity profiles and thereby are able to suggest news articles to sections that do not yet exist in a specific entity INLINEFORM0 . The second advantage is that we are able to canonicalize the sections, i.e. `Early Life' and `Early Life and Childhood' would be treated similarly.",
"To generate the section template INLINEFORM0 , we extract all sections from entities of a given type INLINEFORM1 at year INLINEFORM2 . Next, we cluster the entity sections, based on an extended version of k–means clustering BIBREF18 , namely x–means clustering introduced in Pelleg et al. which estimates the number of clusters efficiently BIBREF19 . As a similarity metric we use the cosine similarity computed based on the tf–idf models of the sections. Using the x–means algorithm we overcome the requirement to provide the number of clusters k beforehand. x–means extends the k–means algorithm, such that a user only specifies a range [ INLINEFORM3 , INLINEFORM4 ] that the number of clusters may reasonably lie in.",
"The learning objective of INLINEFORM0 is to determine the overall fit of a news article INLINEFORM1 to one of the sections in a given section template INLINEFORM2 . The template is pre-determined by the class of the entity for which the news is suggested as relevant by INLINEFORM3 . In all cases, we measure how well INLINEFORM4 fits each of the sections INLINEFORM5 as well as the specific entity section INLINEFORM6 . The section profiles in INLINEFORM7 represent the aggregated entity profiles from all entities of class INLINEFORM8 at year INLINEFORM9 .",
"To learn INLINEFORM0 we rely on a variety of features that consider several similarity aspects as shown in Table TABREF31 . For the sake of simplicity we do not make the distinction in Table TABREF31 between the individual entity section and class-based section similarities, INLINEFORM1 and INLINEFORM2 , respectively. Bear in mind that an entity section INLINEFORM3 might be present at year INLINEFORM4 but not at year INLINEFORM5 (see for more details the discussion on entity profile expansion in Section UID69 ).",
"Topic. We use topic similarities to ensure (i) that the content of INLINEFORM0 fits topic-wise with a specific section text and (ii) that it has a similar topic to previously referred news articles in that section. In a pre-processing stage we compute the topic models for the news articles, entity sections INLINEFORM1 and the aggregated class-based sections in INLINEFORM2 . The topic models are computed using LDA BIBREF20 . We only computed a single topic per article/section as we are only interested in topic term overlaps between article and sections. We distinguish two main features: the first feature measures the overlap of topic terms between INLINEFORM3 and the entity section INLINEFORM4 and INLINEFORM5 , and the second feature measures the overlap of the topic model of INLINEFORM6 against referred news articles in INLINEFORM7 at time INLINEFORM8 .",
"Syntactic. These features represent a mechanism for conveying the importance of a specific text snippet, solely based on the frequency of specific POS tags (i.e. NNP, CD etc.), as commonly used in text summarization tasks. Following the same intuition as in BIBREF8 , we weigh the importance of articles by the count of specific POS tags. We expect that for different sections, the importance of POS tags will vary. We measure the similarity of POS tags in a news article against the section text. Additionally, we consider bi-gram and tri-gram POS tag overlap. This exploits similarity in syntactical patterns between the news and section text.",
"Lexical. As lexical features, we measure the similarity of INLINEFORM0 against the entity section text INLINEFORM1 and the aggregate section text INLINEFORM2 . Further, we distinguish between the overall similarity of INLINEFORM3 and that of the different news paragraphs ( INLINEFORM4 which denotes the paragraphs of INLINEFORM5 up to the 5th paragraph). A higher similarity on the first paragraphs represents a more confident indicator that INLINEFORM6 should be suggested to a specific section INLINEFORM7 . We measure the similarity based on two metrics: (i) the KL-divergence between the computed language models and (ii) cosine similarity of the corresponding paragraph text INLINEFORM8 and section text.",
"Entity-based. Another feature set we consider is the overlap of named entities and their corresponding entity classes. For different entity sections, we expect to find a particular set of entity classes that will correlate with the section, e.g. `Early Life' contains mostly entities related to family, school, universities etc.",
"Frequency. Finally, we gather statistics about the number of entities, paragraphs, news article length, top– INLINEFORM0 entities and entity classes, and the frequency of different POS tags. Here we try to capture patterns of articles that are usually cited in specific sections."
],
[
"In this section we outline the evaluation plan to verify the effectiveness of our learning approaches. To evaluate the news suggestion problem we are faced with two challenges.",
"What comprises the ground truth for such a task ?",
"How do we construct training and test splits given that entity pages consists of text added at different points in time ?",
"Consider the ground truth challenge. Evaluating if an arbitrary news article should be included in Wikipedia is both subjective and difficult for a human if she is not an expert. An invasive approach, which was proposed by Barzilay and Sauper BIBREF8 , adds content directly to Wikipedia and expects the editors or other users to redact irrelevant content over a period of time. The limitations of such an evaluation technique is that content added to long-tail entities might not be evaluated by informed users or editors in the experiment time frame. It is hard to estimate how much time the added content should be left on the entity page. A more non-invasive approach could involve crowdsourcing of entity and news article pairs in an IR style relevance assessment setup. The problem of such an approach is again finding knowledgeable users or experts for long-tail entities. Thus the notion of relevance of a news recommendation is challenging to evaluate in a crowd setup.",
"We take a slightly different approach by making an assumption that the news articles already present in Wikipedia entity pages are relevant. To this extent, we extract a dataset comprising of all news articles referenced in entity pages (details in Section SECREF40 ). At the expense of not evaluating the space comprising of news articles absent in Wikipedia, we succeed in (i) avoiding restrictive assumptions about the quality of human judgments, (ii) being invasive and polluting Wikipedia, and (iii) deriving a reusable test bed for quicker experimentation.",
"The second challenge of construction of training and test set separation is slightly easier and is addressed in Section SECREF46 ."
],
[
"The datasets we use for our experimental evaluation are directly extracted from the Wikipedia entity pages and their revision history. The generated data represents one of the contributions of our paper. The datasets are the following:",
"Entity Classes. We focus on a manually predetermined set of entity classes for which we expect to have news coverage. The number of analyzed entity classes is 27, including INLINEFORM0 entities with at least one news reference. The entity classes were selected from the DBpedia class ontology. Figure FIGREF42 shows the number of entities per class for the years (2009-2014).",
"News Articles. We extract all news references from the collected Wikipedia entity pages. The extracted news references are associated with the sections in which they appear. In total there were INLINEFORM0 news references, and after crawling we end up with INLINEFORM1 successfully crawled news articles. The details of the news article distribution, and the number of entities and sections from which they are referred are shown in Table TABREF44 .",
"Article-Entity Ground-truth. The dataset comprises of the news and entity pairs INLINEFORM0 . News-entity pairs are relevant if the news article is referenced in the entity page. Non-relevant pairs (i.e. negative training examples) consist of news articles that contain an entity but are not referenced in that entity's page. If a news article INLINEFORM1 is referred from INLINEFORM2 at year INLINEFORM3 , the features are computed taking into account the entity profiles at year INLINEFORM4 .",
"Article-Section Ground-truth. The dataset consists of the triple INLINEFORM0 , where INLINEFORM1 , where we assume that INLINEFORM2 has already been determined as relevant. We therefore have a multi-class classification problem where we need to determine the section of INLINEFORM3 where INLINEFORM4 is cited. Similar to the article-entity ground truth, here too the features compute the similarity between INLINEFORM5 , INLINEFORM6 and INLINEFORM7 ."
],
[
"We POS-tag the news articles and entity profiles INLINEFORM0 with the Stanford tagger BIBREF21 . For entity linking the news articles, we use TagMe! BIBREF22 with a confidence score of 0.3. On a manual inspection of a random sample of 1000 disambiguated entities, the accuracy is above 0.9. On average, the number of entities per news article is approximately 30. For entity linking the entity profiles, we simply follow the anchor text that refers to Wikipedia entities."
],
[
"We evaluate the generated supervised models for the two tasks, AEP and ASP, by splitting the train and testing instances. It is important to note that for the pairs INLINEFORM0 and the triple INLINEFORM1 , the news article INLINEFORM2 is referenced at time INLINEFORM3 by entity INLINEFORM4 , while the features take into account the entity profile at time INLINEFORM5 . This avoids any `overlapping' content between the news article and the entity page, which could affect the learning task of the functions INLINEFORM6 and INLINEFORM7 . Table TABREF47 shows the statistics of train and test instances. We learn the functions at year INLINEFORM8 and test on instances for the years greater than INLINEFORM9 . Please note that we do not show the performance for year 2014 as we do not have data for 2015 for evaluation."
],
[
"Here we show the evaluation setup for ASP task and discuss the results with a focus on three main aspects, (i) the overall performance across the years, (ii) the entity class specific performance, and (iii) the impact on entity profile expansion by suggesting missing sections to entities based on the pre-computed templates.",
"Baselines. To the best of our knowledge, we are not aware of any comparable approach for this task. Therefore, the baselines we consider are the following:",
"S1: Pick the section from template INLINEFORM0 with the highest lexical similarity to INLINEFORM1 : S1 INLINEFORM2 ",
"S2: Place the news into the most frequent section in INLINEFORM0 ",
"Learning Models. We use Random Forests (RF) BIBREF23 and Support Vector Machines (SVM) BIBREF24 . The models are optimized taking into account the features in Table TABREF31 . In contrast to the AEP task, here the scale of the number of instances allows us to learn the SVM models. The SVM model is optimized using the INLINEFORM0 loss function and uses the Gaussian kernels.",
"Metrics. We compute precision P as the ratio of news for which we pick a section INLINEFORM0 from INLINEFORM1 and INLINEFORM2 conforms to the one in our ground-truth (see Section SECREF40 ). The definition of recall R and F1 score follows from that of precision.",
"Figure FIGREF66 shows the overall performance and a comparison of our approach (when INLINEFORM0 is optimized using SVM) against the best performing baseline S2. With the increase in the number of training instances for the ASP task the performance is a monotonically non-decreasing function. For the year 2009, we optimize the learning objective of INLINEFORM1 with around 8% of the total instances, and evaluate on the rest. The performance on average is around P=0.66 across all classes. Even though for many classes the performance is already stable (as we will see in the next section), for some classes we improve further. If we take into account the years between 2010 and 2012, we have an increase of INLINEFORM2 P=0.17, with around 70% of instances used for training and the remainder for evaluation. For the remaining years the total improvement is INLINEFORM3 P=0.18 in contrast to the performance at year 2009.",
"On the other hand, the baseline S1 has an average precision of P=0.12. The performance across the years varies slightly, with the year 2011 having the highest average precision of P=0.13. Always picking the most frequent section as in S2, as shown in Figure FIGREF66 , results in an average precision of P=0.17, with a uniform distribution across the years.",
"Here we show the performance of INLINEFORM0 decomposed for the different entity classes. Specifically we analyze the 27 classes in Figure FIGREF42 . In Table TABREF68 , we show the results for a range of years (we omit showing all years due to space constraints). For illustration purposes only, we group them into four main classes ( INLINEFORM1 Person, Organization, Location, Event INLINEFORM2 ) and into the specific sub-classes shown in the second column in Table TABREF68 . For instance, the entity classes OfficeHolder and Politician are aggregated into Person–Politics.",
"It is evident that in the first year the performance is lower in contrast to the later years. This is due to the fact that as we proceed, we can better generalize and accurately determine the correct fit of an article INLINEFORM0 into one of the sections from the pre-computed templates INLINEFORM1 . The results are already stable for the year range INLINEFORM2 . For a few Person sub-classes, e.g. Politics, Entertainment, we achieve an F1 score above 0.9. These additionally represent classes with a sufficient number of training instances for the years INLINEFORM3 . The lowest F1 score is for the Criminal and Television classes. However, this is directly correlated with the insufficient number of instances.",
"The baseline approaches for the ASP task perform poorly. S1, based on lexical similarity, has a varying performance for different entity classes. The best performance is achieved for the class Person – Politics, with P=0.43. This highlights the importance of our feature choice and that the ASP cannot be considered as a linear function, where the maximum similarity yields the best results. For different entity classes different features and combination of features is necessary. Considering that S2 is the overall best performing baseline, through our approach INLINEFORM0 we have a significant improvement of over INLINEFORM1 P=+0.64.",
"The models we learn are very robust and obtain high accuracy, fulfilling our pre-condition for accurate news suggestions into the entity sections. We measure the robustness of INLINEFORM0 through the INLINEFORM1 statistic. In this case, we have a model with roughly 10 labels (corresponding to the number of sections in a template INLINEFORM2 ). The score we achieve shows that our model predicts with high confidence with INLINEFORM3 .",
"The last analysis is the impact we have on expanding entity profiles INLINEFORM0 with new sections. Figure FIGREF70 shows the ratio of sections for which we correctly suggest an article INLINEFORM1 to the right section in the section template INLINEFORM2 . The ratio here corresponds to sections that are not present in the entity profile at year INLINEFORM3 , that is INLINEFORM4 . However, given the generated templates INLINEFORM5 , we can expand the entity profile INLINEFORM6 with a new section at time INLINEFORM7 . In details, in the absence of a section at time INLINEFORM8 , our model trains well on similar sections from the section template INLINEFORM9 , hence we can predict accurately the section and in this case suggest its addition to the entity profile. With time, it is obvious that the expansion rate decreases at later years as the entity profiles become more `complete'.",
"This is particularly interesting for expanding the entity profiles of long-tail entities as well as updating entities with real-world emerging events that are added constantly. In many cases such missing sections are present at one of the entities of the respective entity class INLINEFORM0 . An obvious case is the example taken in Section SECREF16 , where the `Accidents' is rather common for entities of type Airline. However, it is non-existent for some specific entity instances, i.e Germanwings airline.",
"Through our ASP approach INLINEFORM0 , we are able to expand both long-tail and trunk entities. We distinguish between the two types of entities by simply measuring their section text length. The real distribution in the ground truth (see Section SECREF40 ) is 27% and 73% are long-tail and trunk entities, respectively. We are able to expand the entity profiles for both cases and all entity classes without a significant difference, with the only exception being the class Creative Work, where we expand significantly more trunk entities."
],
[
"In this work, we have proposed an automated approach for the novel task of suggesting news articles to Wikipedia entity pages to facilitate Wikipedia updating. The process consists of two stages. In the first stage, article–entity placement, we suggest news articles to entity pages by considering three main factors, such as entity salience in a news article, relative authority and novelty of news articles for an entity page. In the second stage, article–section placement, we determine the best fitting section in an entity page. Here, we remedy the problem of incomplete entity section profiles by constructing section templates for specific entity classes. This allows us to add missing sections to entity pages. We carry out an extensive experimental evaluation on 351,983 news articles and 73,734 entities coming from 27 distinct entity classes. For the first stage, we achieve an overall performance with P=0.93, R=0.514 and F1=0.676, outperforming our baseline competitors significantly. For the second stage, we show that we can learn incrementally to determine the correct section for a news article based on section templates. The overall performance across different classes is P=0.844, R=0.885 and F1=0.860.",
"In the future, we will enhance our work by extracting facts from the suggested news articles. Results suggest that the news content cited in entity pages comes from the first paragraphs. However, challenging task such as the canonicalization and chronological ordering of facts, still remain."
]
],
"section_name": [
"Introduction",
"Related Work",
"Terminology and Problem Definition",
"Approach Overview",
"News Article Suggestion",
"Article–Entity Placement",
"Article–Section Placement",
"Evaluation Plan",
"Datasets",
"Data Pre-Processing",
"Train and Testing Evaluation Setup",
"Article-Section Placement",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"4ecd3926a575b817c2d1d3612cd6f3ebe838e8f8",
"fc2928695ad000dc1c475f74f22d29d01c01543d"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 2: News suggestion approach overview.",
"However, not all Wikipedia pages referring to entities (entity pages) are comprehensive: relevant information can either be missing or added with a delay. Consider the city of New Orleans and the state of Odisha which were severely affected by cyclones Hurricane Katrina and Odisha Cyclone, respectively. While Katrina finds extensive mention in the entity page for New Orleans, Odisha Cyclone which has 5 times more human casualties (cf. Figure FIGREF2 ) is not mentioned in the page for Odisha. Arguably Katrina and New Orleans are more popular entities, but Odisha Cyclone was also reported extensively in national and international news outlets. This highlights the lack of important facts in trunk and long-tail entity pages, even in the presence of relevant sources. In addition, previous studies have shown that there is an inherent delay or lag when facts are added to entity pages BIBREF7 .",
"FLOAT SELECTED: Figure 1: Comparing how cyclones are reported in Wikipedia entity pages."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 2: News suggestion approach overview.",
"While Katrina finds extensive mention in the entity page for New Orleans, Odisha Cyclone which has 5 times more human casualties (cf. Figure FIGREF2 ) is not mentioned in the page for Odisha. ",
"FLOAT SELECTED: Figure 1: Comparing how cyclones are reported in Wikipedia entity pages."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We evaluate the proposed approach on a news corpus consisting of 351,982 articles crawled from the news external references in Wikipedia from 73,734 entity pages. Given the Wikipedia snapshot at a given year (in our case [2009-2014]), we suggest news articles that might be cited in the coming years. The existing news references in the entity pages along with their reference date act as our ground-truth to evaluate our approach. In summary, we make the following contributions.",
"The datasets we use for our experimental evaluation are directly extracted from the Wikipedia entity pages and their revision history. The generated data represents one of the contributions of our paper. The datasets are the following:",
"Entity Classes. We focus on a manually predetermined set of entity classes for which we expect to have news coverage. The number of analyzed entity classes is 27, including INLINEFORM0 entities with at least one news reference. The entity classes were selected from the DBpedia class ontology. Figure FIGREF42 shows the number of entities per class for the years (2009-2014).",
"News Articles. We extract all news references from the collected Wikipedia entity pages. The extracted news references are associated with the sections in which they appear. In total there were INLINEFORM0 news references, and after crawling we end up with INLINEFORM1 successfully crawled news articles. The details of the news article distribution, and the number of entities and sections from which they are referred are shown in Table TABREF44 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate the proposed approach on a news corpus consisting of 351,982 articles crawled from the news external references in Wikipedia from 73,734 entity pages.",
"The datasets we use for our experimental evaluation are directly extracted from the Wikipedia entity pages and their revision history. The generated data represents one of the contributions of our paper. The datasets are the following:",
"Entity Classes. We focus on a manually predetermined set of entity classes for which we expect to have news coverage. The number of analyzed entity classes is 27, including INLINEFORM0 entities with at least one news reference. The entity classes were selected from the DBpedia class ontology. Figure FIGREF42 shows the number of entities per class for the years (2009-2014).",
"News Articles. We extract all news references from the collected Wikipedia entity pages. The extracted news references are associated with the sections in which they appear. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"243fc0e8fcc5e62b3f1884d7b6db784c1bf46c44",
"43c49236b920d9e2197c0f57124c6d4d67acaf34"
],
"answer": [
{
"evidence": [
"Here we introduce the evaluation setup and analyze the results for the article–entity (AEP) placement task. We only report the evaluation metrics for the `relevant' news-entity pairs. A detailed explanation on why we focus on the `relevant' pairs is provided in Section SECREF16 .",
"Baselines. We consider the following baselines for this task.",
"B1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 .",
"B2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .",
"Here we show the evaluation setup for ASP task and discuss the results with a focus on three main aspects, (i) the overall performance across the years, (ii) the entity class specific performance, and (iii) the impact on entity profile expansion by suggesting missing sections to entities based on the pre-computed templates.",
"Baselines. To the best of our knowledge, we are not aware of any comparable approach for this task. Therefore, the baselines we consider are the following:",
"S1: Pick the section from template INLINEFORM0 with the highest lexical similarity to INLINEFORM1 : S1 INLINEFORM2",
"S2: Place the news into the most frequent section in INLINEFORM0"
],
"extractive_spans": [],
"free_form_answer": "For Article-Entity placement, they consider two baselines: the first one using only salience-based features, and the second baseline checks if the entity appears in the title of the article. \n\nFor Article-Section Placement, they consider two baselines: the first picks the section with the highest lexical similarity to the article, and the second one picks the most frequent section.",
"highlighted_evidence": [
"Here we introduce the evaluation setup and analyze the results for the article–entity (AEP) placement task. We only report the evaluation metrics for the `relevant' news-entity pairs. ",
"Baselines. We consider the following baselines for this task.\n\nB1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 .\n\nB2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .",
"Here we show the evaluation setup for ASP task and discuss the results with a focus on three main aspects, (i) the overall performance across the years, (ii) the entity class specific performance, and (iii) the impact on entity profile expansion by suggesting missing sections to entities based on the pre-computed templates.",
"Baselines. To the best of our knowledge, we are not aware of any comparable approach for this task. Therefore, the baselines we consider are the following:\n\nS1: Pick the section from template INLINEFORM0 with the highest lexical similarity to INLINEFORM1 : S1 INLINEFORM2\n\nS2: Place the news into the most frequent section in INLINEFORM0"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Baselines. We consider the following baselines for this task.",
"B1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 .",
"B2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .",
"Baselines. To the best of our knowledge, we are not aware of any comparable approach for this task. Therefore, the baselines we consider are the following:",
"S1: Pick the section from template INLINEFORM0 with the highest lexical similarity to INLINEFORM1 : S1 INLINEFORM2",
"S2: Place the news into the most frequent section in INLINEFORM0"
],
"extractive_spans": [
"B1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 .",
"B2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .\n\n",
"S1: Pick the section from template INLINEFORM0 with the highest lexical similarity to INLINEFORM1 : S1 INLINEFORM2",
"S2: Place the news into the most frequent section in INLINEFORM0"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baselines. We consider the following baselines for this task.\n\nB1. The first baseline uses only the salience-based features by Dunietz and Gillick BIBREF11 .\n\nB2. The second baseline assigns the value relevant to a pair INLINEFORM0 , if and only if INLINEFORM1 appears in the title of INLINEFORM2 .",
" Therefore, the baselines we consider are the following:\n\nS1: Pick the section from template INLINEFORM0 with the highest lexical similarity to INLINEFORM1 : S1 INLINEFORM2\n\nS2: Place the news into the most frequent section in INLINEFORM0"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"1d98451a07a700f3064c4035aaf605e880a4e60c",
"33dc51c299eca382a4b9fd7c1df0dd5d43999447"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"News Articles. We extract all news references from the collected Wikipedia entity pages. The extracted news references are associated with the sections in which they appear. In total there were INLINEFORM0 news references, and after crawling we end up with INLINEFORM1 successfully crawled news articles. The details of the news article distribution, and the number of entities and sections from which they are referred are shown in Table TABREF44 ."
],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We evaluate the proposed approach on a news corpus consisting of 351,982 articles crawled from the news external references in Wikipedia from 73,734 entity pages. Given the Wikipedia snapshot at a given year (in our case [2009-2014]), we suggest news articles that might be cited in the coming years. The existing news references in the entity pages along with their reference date act as our ground-truth to evaluate our approach. In summary, we make the following contributions."
],
"extractive_spans": [
" the news external references in Wikipedia"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate the proposed approach on a news corpus consisting of 351,982 articles crawled from the news external references in Wikipedia from 73,734 entity pages."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"72468cd6c3cf3ff64aa9dcddbc5c1ec93ad44585"
],
"answer": [
{
"evidence": [
"We model the ASP placement task as a successor of the AEP task. For all the `relevant' news entity pairs, the task is to determine the correct entity section. Each section in a Wikipedia entity page represents a different topic. For example, Barack Obama has the sections `Early Life', `Presidency', `Family and Personal Life' etc. However, many entity pages have an incomplete section structure. Incomplete or missing sections are due to two Wikipedia properties. First, long-tail entities miss information and sections due to their lack of popularity. Second, for all entities whether popular or not, certain sections might occur for the first time due to real world developments. As an example, the entity Germanwings did not have an `Accidents' section before this year's disaster, which was the first in the history of the airline.",
"Article-Section Ground-truth. The dataset consists of the triple INLINEFORM0 , where INLINEFORM1 , where we assume that INLINEFORM2 has already been determined as relevant. We therefore have a multi-class classification problem where we need to determine the section of INLINEFORM3 where INLINEFORM4 is cited. Similar to the article-entity ground truth, here too the features compute the similarity between INLINEFORM5 , INLINEFORM6 and INLINEFORM7 ."
],
"extractive_spans": [],
"free_form_answer": "They use a multi-class classifier to determine the section it should be cited",
"highlighted_evidence": [
"We model the ASP placement task as a successor of the AEP task. For all the `relevant' news entity pairs, the task is to determine the correct entity section. Each section in a Wikipedia entity page represents a different topic. For example, Barack Obama has the sections `Early Life', `Presidency', `Family and Personal Life' etc.",
"Article-Section Ground-truth. The dataset consists of the triple INLINEFORM0 , where INLINEFORM1 , where we assume that INLINEFORM2 has already been determined as relevant. We therefore have a multi-class classification problem where we need to determine the section of INLINEFORM3 where INLINEFORM4 is cited. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"9a56da93bfb59d4aeb4b4ca479595407ac969c50",
"fce3ca1cacab1f26651ddf574b0d8726ec50b3c9"
],
"answer": [
{
"evidence": [
"An important feature when suggesting an article INLINEFORM0 to an entity INLINEFORM1 is the novelty of INLINEFORM2 w.r.t the already existing entity profile INLINEFORM3 . Studies BIBREF17 have shown that on comparable collections to ours (TREC GOV2) the number of duplicates can go up to INLINEFORM4 . This figure is likely higher for major events concerning highly authoritative entities on which all news media will report.",
"Given an entity INLINEFORM0 and the already added news references INLINEFORM1 up to year INLINEFORM2 , the novelty of INLINEFORM3 at year INLINEFORM4 is measured by the KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6 . We combine this measure with the entity overlap of INLINEFORM7 and INLINEFORM8 . The novelty value of INLINEFORM9 is given by the minimal divergence value. Low scores indicate low novelty for the entity profile INLINEFORM10 ."
],
"extractive_spans": [],
"free_form_answer": "KL-divergences of language models for the news article and the already added news references",
"highlighted_evidence": [
"An important feature when suggesting an article INLINEFORM0 to an entity INLINEFORM1 is the novelty of INLINEFORM2 w.r.t the already existing entity profile INLINEFORM3",
"Given an entity INLINEFORM0 and the already added news references INLINEFORM1 up to year INLINEFORM2 , the novelty of INLINEFORM3 at year INLINEFORM4 is measured by the KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6 . We combine this measure with the entity overlap of INLINEFORM7 and INLINEFORM8 . The novelty value of INLINEFORM9 is given by the minimal divergence value. Low scores indicate low novelty for the entity profile INLINEFORM10 .\n\n"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Given an entity INLINEFORM0 and the already added news references INLINEFORM1 up to year INLINEFORM2 , the novelty of INLINEFORM3 at year INLINEFORM4 is measured by the KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6 . We combine this measure with the entity overlap of INLINEFORM7 and INLINEFORM8 . The novelty value of INLINEFORM9 is given by the minimal divergence value. Low scores indicate low novelty for the entity profile INLINEFORM10 ."
],
"extractive_spans": [
"KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6"
],
"free_form_answer": "",
"highlighted_evidence": [
"Given an entity INLINEFORM0 and the already added news references INLINEFORM1 up to year INLINEFORM2 , the novelty of INLINEFORM3 at year INLINEFORM4 is measured by the KL divergence between the language model of INLINEFORM5 and articles in INLINEFORM6 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"85f163e41d965aba7f9b8b69f592a89e826f62f7",
"b623fd4b6beaf5d471dcbe6946b1fd836eee8271"
],
"answer": [
{
"evidence": [
"Baseline Features. As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. Table 2 in BIBREF11 gives details."
],
"extractive_spans": [],
"free_form_answer": "Salience features positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in.\nThe relative authority of entity features: comparative relevance of the news article to the different entities occurring in it.",
"highlighted_evidence": [
"As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Baseline Features. As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. Table 2 in BIBREF11 gives details.",
"Relative Entity Frequency. Although frequency of mention and positional features play some role in baseline features, their interaction is not modeled by a single feature nor do the positional features encode more than sentence position. We therefore suggest a novel feature called relative entity frequency, INLINEFORM0 , that has three properties.: (i) It rewards entities for occurring throughout the text instead of only in some parts of the text, measured by the number of paragraphs it occurs in (ii) it rewards entities that occur more frequently in the opening paragraphs of an article as we model INLINEFORM1 as an exponential decay function. The decay corresponds to the positional index of the news paragraph. This is inspired by the news-specific discourse structure that tends to give short summaries of the most important facts and entities in the opening paragraphs. (iii) it compares entity frequency to the frequency of its co-occurring mentions as the weight of an entity appearing in a specific paragraph, normalized by the sum of the frequencies of other entities in INLINEFORM2 . DISPLAYFORM0",
"The a priori authority of an entity (denoted by INLINEFORM0 ) can be measured in several ways. We opt for two approaches: (i) probability of entity INLINEFORM1 occurring in the corpus INLINEFORM2 , and (ii) authority assessed through centrality measures like PageRank BIBREF16 . For the second case we construct the graph INLINEFORM3 consisting of entities in INLINEFORM4 and news articles in INLINEFORM5 as vertices. The edges are established between INLINEFORM6 and entities in INLINEFORM7 , that is INLINEFORM8 , and the out-links from INLINEFORM9 , that is INLINEFORM10 (arrows present the edge direction)."
],
"extractive_spans": [
"positional features",
"occurrence frequency",
"internal POS structure of the entity and the sentence it occurs in",
"relative entity frequency",
"centrality measures like PageRank "
],
"free_form_answer": "",
"highlighted_evidence": [
"Baseline Features. As discussed in Section SECREF2 , a variety of features that measure salience of an entity in text are available from the NLP community. We reimplemented the ones in Dunietz and Gillick BIBREF11 . This includes a variety of features, e.g. positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in. Table 2 in BIBREF11 gives details.",
"Relative Entity Frequency. Although frequency of mention and positional features play some role in baseline features, their interaction is not modeled by a single feature nor do the positional features encode more than sentence position. We therefore suggest a novel feature called relative entity frequency, INLINEFORM0 , that has three properties.: (i) It rewards entities for occurring throughout the text instead of only in some parts of the text, measured by the number of paragraphs it occurs in (ii) it rewards entities that occur more frequently in the opening paragraphs of an article as we model INLINEFORM1 as an exponential decay function. ",
"The a priori authority of an entity (denoted by INLINEFORM0 ) can be measured in several ways. We opt for two approaches: (i) probability of entity INLINEFORM1 occurring in the corpus INLINEFORM2 , and (ii) authority assessed through centrality measures like PageRank BIBREF16 . For the second case we construct the graph INLINEFORM3 consisting of entities in INLINEFORM4 and news articles in INLINEFORM5 as vertices. The edges are established between INLINEFORM6 and entities in INLINEFORM7 , that is INLINEFORM8 , and the out-links from INLINEFORM9 , that is INLINEFORM10 (arrows present the edge direction)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"What baseline model is used?",
"What news article sources are used?",
"How do they determine the exact section to use the input article?",
"What features are used to represent the novelty of news articles to entity pages?",
"What features are used to represent the salience and relative authority of entities?"
],
"question_id": [
"77c34f1033702278f7f044806c1eba0c6ecb8b04",
"2ee715c7c6289669f11a79743a6b2b696073805d",
"61a9ea36ddc37c60d1a51dabcfff9445a2225725",
"cc850bc8245a7ae790e1f59014371d4f35cd46d7",
"984fc3e726848f8f13dfe72b89e3770d00c3a1af",
"fb1227b3681c69f60eb0539e16c5a8cd784177a7"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Comparing how cyclones are reported in Wikipedia entity pages.",
"Figure 2: News suggestion approach overview.",
"Table 1: Article–Entity placement feature summary.",
"Table 2: Feature types used in Fs for suggesting news articles into the entity sections. We compute the features for all s ∈ Ŝc(t− 1) as well as se(t− 1).",
"Figure 3: Number of entities with at least one news reference for different entity classes.",
"Table 3: News articles, entities and sections distribution across years.",
"Table 4: Number of instances for train and test in the AEP and ASP tasks.",
"Table 5: Article–Entity placement task performance.",
"Figure 4: Precision-Recall curve for the article–entity placement task, in blue is shown Fe, and in red is the baseline B1.",
"Figure 5: Feature analysis for the AEP placement task for t = 2009.",
"Figure 6: Article-Section performance averaged for all entity classes for Fs (using SVM) and S2.",
"Figure 7: Correctly suggested news articles for s ∈ Se(t) ∧ s /∈ Se(t− 1).",
"Table 6: Article-Section placement performance (with Fs learned through SVM) for the different entity classes. The results show the standard P/R/F1."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"4-Table1-1.png",
"6-Table2-1.png",
"7-Figure3-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"8-Table5-1.png",
"9-Figure4-1.png",
"9-Figure5-1.png",
"10-Figure6-1.png",
"10-Figure7-1.png",
"11-Table6-1.png"
]
} | [
"What baseline model is used?",
"How do they determine the exact section to use the input article?",
"What features are used to represent the novelty of news articles to entity pages?",
"What features are used to represent the salience and relative authority of entities?"
] | [
[
"1703.10344-Article-Section Placement-0",
"1703.10344-Article–Entity Placement-20",
"1703.10344-Article–Entity Placement-21",
"1703.10344-Article–Entity Placement-19",
"1703.10344-Article-Section Placement-1",
"1703.10344-Article–Entity Placement-18"
],
[
"1703.10344-Datasets-4",
"1703.10344-Article–Section Placement-0"
],
[
"1703.10344-Article–Entity Placement-14",
"1703.10344-Article–Entity Placement-15"
],
[
"1703.10344-Article–Entity Placement-5",
"1703.10344-Article–Entity Placement-9"
]
] | [
"For Article-Entity placement, they consider two baselines: the first one using only salience-based features, and the second baseline checks if the entity appears in the title of the article. \n\nFor Article-Section Placement, they consider two baselines: the first picks the section with the highest lexical similarity to the article, and the second one picks the most frequent section.",
"They use a multi-class classifier to determine the section it should be cited",
"KL-divergences of language models for the news article and the already added news references",
"Salience features positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in.\nThe relative authority of entity features: comparative relevance of the news article to the different entities occurring in it."
] | 122 |
2003.13032 | Named Entities in Medical Case Reports: Corpus and Experiments | We present a new corpus comprising annotations of medical entities in case reports, originating from PubMed Central's open access library. In the case reports, we annotate cases, conditions, findings, factors and negation modifiers. Moreover, where applicable, we annotate relations between these entities. As such, this is the first corpus of this kind made available to the scientific community in English. It enables the initial investigation of automatic information extraction from case reports through tasks like Named Entity Recognition, Relation Extraction and (sentence/paragraph) relevance detection. Additionally, we present four strong baseline systems for the detection of medical entities made available through the annotated dataset. | {
"paragraphs": [
[
"The automatic processing of medical texts and documents plays an increasingly important role in the recent development of the digital health area. To enable dedicated Natural Language Processing (NLP) that is highly accurate with respect to medically relevant categories, manually annotated data from this domain is needed. One category of high interest and relevance are medical entities. Only very few annotated corpora in the medical domain exist. Many of them focus on the relation between chemicals and diseases or proteins and diseases, such as the BC5CDR corpus BIBREF0, the Comparative Toxicogenomics Database BIBREF1, the FSU PRotein GEne corpus BIBREF2 or the ADE (adverse drug effect) corpus BIBREF3. The NCBI Disease Corpus BIBREF4 contains condition mention annotations along with annotations of symptoms. Several new corpora of annotated case reports were made available recently. grouin-etal-2019-clinical presented a corpus with medical entity annotations of clinical cases written in French, copdPhenotype presented a corpus focusing on phenotypic information for chronic obstructive pulmonary disease while 10.1093/database/bay143 presented a corpus focusing on identifying main finding sentences in case reports.",
"The corpus most comparable to ours is the French corpus of clinical case reports by grouin-etal-2019-clinical. Their annotations are based on UMLS semantic types. Even though there is an overlap in annotated entities, semantic classes are not the same. Lab results are subsumed under findings in our corpus and are not annotated as their own class. Factors extend beyond gender and age and describe any kind of risk factor that contributes to a higher probability of having a certain disease. Our corpus includes additional entity types. We annotate conditions, findings (including medical findings such as blood values), factors, and also modifiers which indicate the negation of other entities as well as case entities, i. e., entities specific to one case report. An overview is available in Table TABREF3."
],
[
"Case reports are standardized in the CARE guidelines BIBREF5. They represent a detailed description of the symptoms, signs, diagnosis, treatment, and follow-up of an individual patient. We focus on documents freely available through PubMed Central (PMC). The presentation of the patient's case can usually be found in a dedicated section or the abstract. We perform a manual annotation of all mentions of case entities, conditions, findings, factors and modifiers. The scope of our manual annotation is limited to the presentation of a patient's signs and symptoms. In addition, we annotate the title of the case report."
],
[
"We annotate the following entities:",
"case entity marks the mention of a patient. A case report can contain more than one case description. Therefore, all the findings, factors and conditions related to one patient are linked to the respective case entity. Within the text, this entity is often represented by the first mention of the patient and overlaps with the factor annotations which can, e. g., mark sex and age (cf. Figure FIGREF12).",
"condition marks a medical disease such as pneumothorax or dislocation of the shoulder.",
"factor marks a feature of a patient which might influence the probability for a specific diagnosis. It can be immutable (e. g., sex and age), describe a specific medical history (e. g., diabetes mellitus) or a behaviour (e. g., smoking).",
"finding marks a sign or symptom a patient shows. This can be visible (e. g., rash), described by a patient (e. g., headache) or measurable (e. g., decreased blood glucose level).",
"negation modifier explicitly negate the presence of a certain finding usually setting the case apart from common cases.",
"We also annotate relations between these entities, where applicable. Since we work on case descriptions, the anchor point of these relations is the case that is described. The following relations are annotated:",
"has relations exist between a case entity and factor, finding or condition entities.",
"modifies relations exist between negation modifiers and findings.",
"causes relations exist between conditions and findings.",
"Example annotations are shown in Figure FIGREF16."
],
[
"We asked medical doctors experienced in extracting knowledge related to medical entities from texts to annotate the entities described above. Initially, we asked four annotators to test our guidelines on two texts. Subsequently, identified issues were discussed and resolved. Following this pilot annotation phase, we asked two different annotators to annotate two case reports according to our guidelines. The same annotators annotated an overall collection of 53 case reports.",
"Inter-annotator agreement is calculated based on two case reports. We reach a Cohen's kappa BIBREF6 of 0.68. Disagreements mainly appear for findings that are rather unspecific such as She no longer eats out with friends which can be seen as a finding referring to “avoidance behaviour”."
],
[
"The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists. Their quality varied widely throughout the corpus. The blank version was preferred by the annotators. We distribute the corpus in BioC JSON format. BioC was chosen as it allows us to capture the complexities of the annotations in the biomedical domain. It represented each documents properties ranging from full text, individual passages/sentences along with captured annotations and relationships in an organized manner. BioC is based on character offsets of annotations and allows the stacking of different layers."
],
[
"The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24.",
"Findings are the most frequently annotated type of entity. This makes sense given that findings paint a clinical picture of the patient's condition. The number of tokens per entity ranges from one token for all types to 5 tokens for cases (average length 3.1), nine tokens for conditions (average length 2.0), 16 tokens for factors (average length 2.5), 25 tokens for findings (average length 2.6) and 18 tokens for modifiers (average length 1.4) (cf. Table TABREF24). Examples of rather long entities are given in Table TABREF25.",
"Entities can appear in a discontinuous way. We model this as a relation between two spans which we call “discontinuous” (cf. Figure FIGREF26). Especially findings often appear as discontinuous entities, we found 543 discontinuous finding relations. The numbers for conditions and factors are lower with seven and two, respectively. Entities can also be nested within one another. This happens either when the span of one annotation is completely embedded in the span of another annotation (fully-nested; cf. Figure FIGREF12), or when there is a partial overlapping between the spans of two different entities (partially-nested; cf. Figure FIGREF12). There is a high number of inter-sentential relations in the corpus (cf. Table TABREF27). This can be explained by the fact that the case entity occurs early in each document; furthermore, it is related to finding and factor annotations that are distributed across different sentences.",
"The most frequently annotated relation in our corpus is the has-relation between a case entity and the findings related to that case. This correlates with the high number of finding entities. The relations contained in our corpus are summarized in Table TABREF27."
],
[
"We evaluate the corpus using Named Entity Recognition (NER), i. e., the task of finding mentions of concepts of interest in unstructured text. We focus on detecting cases, conditions, factors, findings and modifiers in case reports (cf. Section SECREF6). We approach this as a sequence labeling problem. Four systems were developed to offer comparable robust baselines.",
"The original documents are pre-processed (sentence splitting and tokenization with ScispaCy). We do not perform stop word removal or lower-casing of the tokens. The BIO labeling scheme is used to capture the order of tokens belonging to the same entity type and enable span-level detection of entities. Detection of nested and/or discontinuous entities is not supported. The annotated corpus is randomized and split in five folds using scikit-learn BIBREF9. Each fold has a train, test and dev split with the test split defined as .15% of the train split. This ensures comparability between the presented systems."
],
[
"Conditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. We use a combination of linguistic and semantic features, with a context window of size five, to describe each of the tokens and the dependencies between them. Hyper-parameter optimization is performed using randomized search and cross validation. Span-based F1 score is used as the optimization metric."
],
[
"Prior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. We use a BiLSTM-CRF model with both word-level and character-level input. BioWordVec BIBREF12 pre-trained word embeddings are used in the embedding layer for the input representation. A bidirectional LSTM layer is applied to a multiplication of the two input representations. Finally, a CRF layer is applied to predict the sequence of labels. Dropout and L1/L2 regularization is used where applicable. He (uniform) initialization BIBREF13 is used to initialize the kernels of the individual layers. As the loss metric, CRF-based loss is used, while optimizing the model based on the CRF Viterbi accuracy. Additionally, span-based F1 score is used to serialize the best performing model. We train for a maximum of 100 epochs, or until an early stopping criterion is reached (no change in validation loss value grater than 0.01 for ten consecutive epochs). Furthermore, Adam BIBREF14 is used as the optimizer. The learning rate is reduced by a factor of 0.3 in case no significant increase of the optimization metric is achieved in three consecutive epochs."
],
[
"Multi-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning. This model family is characterized by simultaneous optimization of multiple loss functions and transfer of knowledge achieved this way. The knowledge is transferred through the use of one or multiple shared layers. Through finding supporting patterns in related tasks, MTL provides better generalization on unseen cases and the main tasks we are trying to solve.",
"We rely on the model presented by bekoulis2018joint and reuse the implementation provided by the authors. The model jointly trains two objectives supported by the dataset: the main task of NER and a supporting task of Relation Extraction (RE). Two separate models are developed for each of the tasks. The NER task is solved with the help of a BiLSTM-CRF model, similar to the one presented in Section SECREF32 The RE task is solved by using a multi-head selection approach, where each token can have none or more relationships to in-sentence tokens. Additionally, this model also leverages the output of the NER branch model (the CRF prediction) to learn label embeddings. Shared layers consist of a concatenation of word and character embeddings followed by two bidirectional LSTM layers. We keep most of the parameters suggested by the authors and change (1) the number of training epochs to 100 to allow the comparison to other deep learning approaches in this work, (2) use label embeddings of size 64, (3) allow gradient clipping and (4) use $d=0.8$ as the pre-trained word embedding dropout and $d=0.5$ for all other dropouts. $\\eta =1^{-3}$ is used as the learning rate with the Adam optimizer and tanh activation functions across layers. Although it is possible to use adversarial training BIBREF16, we omit from using it. We also omit the publication of results for the task of RE as we consider it to be a supporting task and no other competing approaches have been developed."
],
[
"Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17. For our experiments, we use BioBERT, an adaptation of BERT for the biomedical domain, pre-trained on PubMed abstracts and PMC full-text articles BIBREF18. The BERT architecture for deriving text representations uses 12 hidden layers, consisting of 768 units each. For NER, token level BIO-tag probabilities are computed with a single output layer based on the representations from the last layer of BERT. We fine-tune the model on the entity recognition task during four training epochs with batch size $b=32$, dropout probability $d=0.1$ and learning rate $\\eta =2^{-5}$. These hyper-parameters are proposed by Devlin2018 for BERT fine-tuning."
],
[
"To evaluate the performance of the four systems, we calculate the span-level precision (P), recall (R) and F1 scores, along with corresponding micro and macro scores. The reported values are shown in Table TABREF29 and are averaged over five folds, utilising the seqeval framework.",
"With a macro avg. F1-score of 0.59, MTL achieves the best result with a significant margin compared to CRF, BiLSTM-CRF and BERT. This confirms the usefulness of jointly training multiple objectives (minimizing multiple loss functions), and enabling knowledge transfer, especially in a setting with limited data (which is usually the case in the biomedical NLP domain). This result also suggest the usefulness of BioBERT for other biomedical datasets as reported by Lee2019. Despite being a rather standard approach, CRF outperforms the more elaborated BiLSTM-CRF, presumably due to data scarcity and class imbalance. We hypothesize that an increase in training data would yield better results for BiLSTM-CRF but not outperform transfer learning approach of MTL (or even BioBERT). In contrast to other common NER corpora, like CoNLL 2003, even the best baseline system only achieves relatively low scores. This outcome is due to the inherent difficulty of the task (annotators are experienced medical doctors) and the small number of training samples."
],
[
"We present a new corpus, developed to facilitate the processing of case reports. The corpus focuses on five distinct entity types: cases, conditions, factors, findings and modifiers. Where applicable, relationships between entities are also annotated. Additionally, we annotate discontinuous entities with a special relationship type (discontinuous). The corpus presented in this paper is the very first of its kind and a valuable addition to the scarce number of corpora available in the field of biomedical NLP. Its complexity, given the discontinuous nature of entities and a high number of nested and multi-label entities, poses new challenges for NLP methods applied for NER and can, hence, be a valuable source for insights into what entities “look like in the wild”. Moreover, it can serve as a playground for new modelling techniques such as the resolution of discontinuous entities as well as multi-task learning given the combination of entities and their relations. We provide an evaluation of four distinct NER systems that will serve as robust baselines for future work but which are, as of yet, unable to solve all the complex challenges this dataset holds. A functional service based on the presented corpus is currently being integrated, as a NER service, in the QURATOR platform BIBREF20."
],
[
"The research presented in this article is funded by the German Federal Ministry of Education and Research (BMBF) through the project QURATOR (Unternehmen Region, Wachstumskern, grant no. 03WKDA1A), see http://qurator.ai. We want to thank our medical experts for their help annotating the data set, especially Ashlee Finckh and Sophie Klopfenstein."
]
],
"section_name": [
"Introduction",
"A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation tasks",
"A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation Guidelines",
"A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotators",
"A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation Tools and Format",
"A Corpus of Medical Case Reports with Medical Entity Annotation ::: Corpus Overview",
"Baseline systems for Named Entity Recognition in medical case reports",
"Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields",
"Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF",
"Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning",
"Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT",
"Baseline systems for Named Entity Recognition in medical case reports ::: Evaluation",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"7bd0a9b152d2a5c2f94c9b74f37067a253f27018"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"636918e657880bf14fa5d0bb5bd41445f515d6a3",
"b6aaa3454fcea02fb9f7b72d767732f549bcce33"
],
"answer": [
{
"evidence": [
"Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields",
"Conditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. We use a combination of linguistic and semantic features, with a context window of size five, to describe each of the tokens and the dependencies between them. Hyper-parameter optimization is performed using randomized search and cross validation. Span-based F1 score is used as the optimization metric.",
"Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF",
"Prior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. We use a BiLSTM-CRF model with both word-level and character-level input. BioWordVec BIBREF12 pre-trained word embeddings are used in the embedding layer for the input representation. A bidirectional LSTM layer is applied to a multiplication of the two input representations. Finally, a CRF layer is applied to predict the sequence of labels. Dropout and L1/L2 regularization is used where applicable. He (uniform) initialization BIBREF13 is used to initialize the kernels of the individual layers. As the loss metric, CRF-based loss is used, while optimizing the model based on the CRF Viterbi accuracy. Additionally, span-based F1 score is used to serialize the best performing model. We train for a maximum of 100 epochs, or until an early stopping criterion is reached (no change in validation loss value grater than 0.01 for ten consecutive epochs). Furthermore, Adam BIBREF14 is used as the optimizer. The learning rate is reduced by a factor of 0.3 in case no significant increase of the optimization metric is achieved in three consecutive epochs.",
"Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning",
"Multi-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning. This model family is characterized by simultaneous optimization of multiple loss functions and transfer of knowledge achieved this way. The knowledge is transferred through the use of one or multiple shared layers. Through finding supporting patterns in related tasks, MTL provides better generalization on unseen cases and the main tasks we are trying to solve.",
"Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT",
"Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17. For our experiments, we use BioBERT, an adaptation of BERT for the biomedical domain, pre-trained on PubMed abstracts and PMC full-text articles BIBREF18. The BERT architecture for deriving text representations uses 12 hidden layers, consisting of 768 units each. For NER, token level BIO-tag probabilities are computed with a single output layer based on the representations from the last layer of BERT. We fine-tune the model on the entity recognition task during four training epochs with batch size $b=32$, dropout probability $d=0.1$ and learning rate $\\eta =2^{-5}$. These hyper-parameters are proposed by Devlin2018 for BERT fine-tuning."
],
"extractive_spans": [
"Conditional Random Fields",
"BiLSTM-CRF",
"Multi-Task Learning",
"BioBERT\n"
],
"free_form_answer": "",
"highlighted_evidence": [
" Conditional Random Fields\nConditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling.",
"BiLSTM-CRF\nPrior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling.",
"Multi-Task Learning\nMulti-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning.",
"BioBERT\nDeep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields",
"Conditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. We use a combination of linguistic and semantic features, with a context window of size five, to describe each of the tokens and the dependencies between them. Hyper-parameter optimization is performed using randomized search and cross validation. Span-based F1 score is used as the optimization metric.",
"Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF",
"Prior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. We use a BiLSTM-CRF model with both word-level and character-level input. BioWordVec BIBREF12 pre-trained word embeddings are used in the embedding layer for the input representation. A bidirectional LSTM layer is applied to a multiplication of the two input representations. Finally, a CRF layer is applied to predict the sequence of labels. Dropout and L1/L2 regularization is used where applicable. He (uniform) initialization BIBREF13 is used to initialize the kernels of the individual layers. As the loss metric, CRF-based loss is used, while optimizing the model based on the CRF Viterbi accuracy. Additionally, span-based F1 score is used to serialize the best performing model. We train for a maximum of 100 epochs, or until an early stopping criterion is reached (no change in validation loss value grater than 0.01 for ten consecutive epochs). Furthermore, Adam BIBREF14 is used as the optimizer. The learning rate is reduced by a factor of 0.3 in case no significant increase of the optimization metric is achieved in three consecutive epochs.",
"Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning",
"Multi-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning. This model family is characterized by simultaneous optimization of multiple loss functions and transfer of knowledge achieved this way. The knowledge is transferred through the use of one or multiple shared layers. Through finding supporting patterns in related tasks, MTL provides better generalization on unseen cases and the main tasks we are trying to solve.",
"Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT",
"Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17. For our experiments, we use BioBERT, an adaptation of BERT for the biomedical domain, pre-trained on PubMed abstracts and PMC full-text articles BIBREF18. The BERT architecture for deriving text representations uses 12 hidden layers, consisting of 768 units each. For NER, token level BIO-tag probabilities are computed with a single output layer based on the representations from the last layer of BERT. We fine-tune the model on the entity recognition task during four training epochs with batch size $b=32$, dropout probability $d=0.1$ and learning rate $\\eta =2^{-5}$. These hyper-parameters are proposed by Devlin2018 for BERT fine-tuning."
],
"extractive_spans": [
"Conditional Random Fields",
"BiLSTM-CRF",
"Multi-Task Learning",
"BioBERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields\nConditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. ",
"Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF\nPrior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. We use a BiLSTM-CRF model with both word-level and character-level input.",
"Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning\nMulti-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning.",
"Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT\nDeep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"319e45ac4c43bdbed4c383cdda97e717ba9d8ed4",
"d03ae00def8a016d766a8c8547b30cf1be7f57da"
],
"answer": [
{
"evidence": [
"The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24."
],
"extractive_spans": [
"8,275 sentences and 167,739 words in total"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24."
],
"extractive_spans": [
"The corpus comprises 8,275 sentences and 167,739 words in total."
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"6305b16f4add60b48be3b0d47dcfe01c7eb1d929",
"bed93f88bc860f792c0d9d007b32f67b07eceb72"
],
"answer": [
{
"evidence": [
"We asked medical doctors experienced in extracting knowledge related to medical entities from texts to annotate the entities described above. Initially, we asked four annotators to test our guidelines on two texts. Subsequently, identified issues were discussed and resolved. Following this pilot annotation phase, we asked two different annotators to annotate two case reports according to our guidelines. The same annotators annotated an overall collection of 53 case reports.",
"The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists. Their quality varied widely throughout the corpus. The blank version was preferred by the annotators. We distribute the corpus in BioC JSON format. BioC was chosen as it allows us to capture the complexities of the annotations in the biomedical domain. It represented each documents properties ranging from full text, individual passages/sentences along with captured annotations and relationships in an organized manner. BioC is based on character offsets of annotations and allows the stacking of different layers."
],
"extractive_spans": [],
"free_form_answer": "Experienced medical doctors used a linguistic annotation tool to annotate entities.",
"highlighted_evidence": [
"We asked medical doctors experienced in extracting knowledge related to medical entities from texts to annotate the entities described above. Initially, we asked four annotators to test our guidelines on two texts. Subsequently, identified issues were discussed and resolved. Following this pilot annotation phase, we asked two different annotators to annotate two case reports according to our guidelines. The same annotators annotated an overall collection of 53 case reports.",
"The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists. Their quality varied widely throughout the corpus. The blank version was preferred by the annotators. We distribute the corpus in BioC JSON format. BioC was chosen as it allows us to capture the complexities of the annotations in the biomedical domain. It represented each documents properties ranging from full text, individual passages/sentences along with captured annotations and relationships in an organized manner. BioC is based on character offsets of annotations and allows the stacking of different layers."
],
"extractive_spans": [
"WebAnno"
],
"free_form_answer": "",
"highlighted_evidence": [
"The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"63b8f50840e9da875363ff6328409d9af30119d1",
"993b6343bc76471fdd9940825a242508784238b7"
],
"answer": [
{
"evidence": [
"The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24."
],
"extractive_spans": [
"53 documents"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24."
],
"extractive_spans": [
"53 documents"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"537accb24acca9a08b101b6d7d145510aa6690a6",
"6709997368c7c0b56377a4f132ae94f42bb5431e"
],
"answer": [
{
"evidence": [
"Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields",
"Conditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. We use a combination of linguistic and semantic features, with a context window of size five, to describe each of the tokens and the dependencies between them. Hyper-parameter optimization is performed using randomized search and cross validation. Span-based F1 score is used as the optimization metric.",
"Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF",
"Prior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. We use a BiLSTM-CRF model with both word-level and character-level input. BioWordVec BIBREF12 pre-trained word embeddings are used in the embedding layer for the input representation. A bidirectional LSTM layer is applied to a multiplication of the two input representations. Finally, a CRF layer is applied to predict the sequence of labels. Dropout and L1/L2 regularization is used where applicable. He (uniform) initialization BIBREF13 is used to initialize the kernels of the individual layers. As the loss metric, CRF-based loss is used, while optimizing the model based on the CRF Viterbi accuracy. Additionally, span-based F1 score is used to serialize the best performing model. We train for a maximum of 100 epochs, or until an early stopping criterion is reached (no change in validation loss value grater than 0.01 for ten consecutive epochs). Furthermore, Adam BIBREF14 is used as the optimizer. The learning rate is reduced by a factor of 0.3 in case no significant increase of the optimization metric is achieved in three consecutive epochs.",
"Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning",
"Multi-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning. This model family is characterized by simultaneous optimization of multiple loss functions and transfer of knowledge achieved this way. The knowledge is transferred through the use of one or multiple shared layers. Through finding supporting patterns in related tasks, MTL provides better generalization on unseen cases and the main tasks we are trying to solve.",
"Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT",
"Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17. For our experiments, we use BioBERT, an adaptation of BERT for the biomedical domain, pre-trained on PubMed abstracts and PMC full-text articles BIBREF18. The BERT architecture for deriving text representations uses 12 hidden layers, consisting of 768 units each. For NER, token level BIO-tag probabilities are computed with a single output layer based on the representations from the last layer of BERT. We fine-tune the model on the entity recognition task during four training epochs with batch size $b=32$, dropout probability $d=0.1$ and learning rate $\\eta =2^{-5}$. These hyper-parameters are proposed by Devlin2018 for BERT fine-tuning."
],
"extractive_spans": [
"Conditional Random Fields",
"BiLSTM-CRF",
"Multi-Task Learning",
"BioBERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"Conditional Random Fields\nConditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling.",
"BiLSTM-CRF\nPrior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling.",
"Multi-Task Learning\nMulti-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning.",
"BioBERT\nDeep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields",
"Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF",
"Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning",
"Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT"
],
"extractive_spans": [
"Conditional Random Fields",
"BiLSTM-CRF",
"Multi-Task Learning",
"BioBERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields",
"Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF",
"Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning",
"Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they experiment with other tasks?",
"What baselines do they introduce?",
"How large is the corpus?",
"How was annotation performed?",
"How many documents are in the new corpus?",
"What baseline systems are proposed?"
],
"question_id": [
"8df35c24af9efc3348d3b8d746df116480dfe661",
"277a7e916e65dfefd44d2d05774f95257ac946ae",
"2916bbdb95ef31ab26527ba67961cf5ec94d6afe",
"f2e8497aa16327aa297a7f9f7d156e485fe33945",
"9b76f428b7c8c9fc930aa88ee585a03478bff9b3",
"dd6b378d89c05058e8f49e48fd48f5c458ea2ebc"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Summary overview of relevant and comparable corpora.",
"Table 2: Corpus statistics",
"Table 4: Annotated relations between entities. Relations appear within a sentence (intra-sentential) or across sentences (inter-sentential)",
"Table 3: Examples of long entities per type",
"Table 5: Span-level precision (P), recall (R) and F1-scores (F1) on four distinct baseline NER systems. All scores are computed as average over five-fold cross validation."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table4-1.png",
"3-Table3-1.png",
"4-Table5-1.png"
]
} | [
"How was annotation performed?"
] | [
[
"2003.13032-A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation Tools and Format-0",
"2003.13032-A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotators-0"
]
] | [
"Experienced medical doctors used a linguistic annotation tool to annotate entities."
] | 123 |
1910.06592 | FacTweet: Profiling Fake News Twitter Accounts | We present an approach to detect fake news in Twitter at the account level using a neural recurrent model and a variety of different semantic and stylistic features. Our method extracts a set of features from the timelines of news Twitter accounts by reading their posts as chunks, rather than dealing with each tweet independently. We show the experimental benefits of modeling latent stylistic signatures of mixed fake and real news with a sequential model over a wide range of strong baselines. | {
"paragraphs": [
[
"Social media platforms have made the spreading of fake news easier, faster as well as able to reach a wider audience. Social media offer another feature which is the anonymity for the authors, and this opens the door to many suspicious individuals or organizations to utilize these platforms. Recently, there has been an increased number of spreading fake news and rumors over the web and social media BIBREF0. Fake news in social media vary considering the intention to mislead. Some of these news are spread with the intention to be ironic or to deliver the news in an ironic way (satirical news). Others, such as propaganda, hoaxes, and clickbaits, are spread to mislead the audience or to manipulate their opinions. In the case of Twitter, suspicious news annotations should be done on a tweet rather than an account level, since some accounts mix fake with real news. However, these annotations are extremely costly and time consuming – i.e., due to high volume of available tweets Consequently, a first step in this direction, e.g., as a pre-filtering step, can be viewed as the task of detecting fake news at the account level.",
"The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts.",
"In this work, we investigate the problem of discriminating between factual and non-factual accounts in Twitter. To this end, we collect a large dataset of tweets using a list of propaganda, hoax and clickbait accounts and compare different versions of sequential chunk-based approaches using a variety of feature sets against several baselines. Several approaches have been proposed for news verification, whether in social media (rumors detection) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, or in news claims BIBREF5, BIBREF6, BIBREF7, BIBREF8. The main orientation in the previous works is to verify the textual claims/tweets but not their sources. To the best of our knowledge, this is the first work aiming to detect factuality at the account level, and especially from a textual perspective. Our contributions are:",
"[leftmargin=4mm]",
"We propose an approach to detect non-factual Twitter accounts by treating post streams as a sequence of tweets' chunks. We test several semantic and dictionary-based features together with a neural sequential approach, and apply an ablation test to investigate their contribution.",
"We benchmark our approach against other approaches that discard the chronological order of the tweets or read the tweets individually. The results show that our approach produces superior results at detecting non-factual accounts."
],
[
"Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks' tweets. We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account's chunks.",
"",
"Input Representation. Let $t$ be a Twitter account that contains $m$ tweets. These tweets are sorted by date and split into a sequence of chunks $ck = \\langle ck_1, \\ldots , ck_n \\rangle $, where each $ck_i$ contains $s$ tweets. Each tweet in $ck_i$ is represented by a vector $v \\in {\\rm I\\!R}^d$ , where $v$ is the concatenation of a set of features' vectors, that is $v = \\langle f_1, \\ldots , f_n \\rangle $. Each feature vector $f_i$ is built by counting the presence of tweet's words in a set of lexical lists. The final representation of the tweet is built by averaging the single word vectors.",
"",
"Features. We argue that different kinds of features like the sentiment of the text, morality, and other text-based features are critical to detect the nonfactual Twitter accounts by utilizing their occurrence during reporting the news in an account's timeline. We employ a rich set of features borrowed from previous works in fake news, bias, and rumors detection BIBREF0, BIBREF1, BIBREF8, BIBREF9.",
"[leftmargin=4mm]",
"Emotion: We build an emotions vector using word occurrences of 15 emotion types from two available emotional lexicons. We use the NRC lexicon BIBREF10, which contains $\\sim $14K words labeled using the eight Plutchik's emotions BIBREF11. The other lexicon is SentiSense BIBREF12 which is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. It has $\\sim $5.5 words labeled with emotions from a set of 14 emotional categories We use the categories that do not exist in the NRC lexicon.",
"Sentiment: We extract the sentiment of the tweets by employing EffectWordNet BIBREF13, SenticNet BIBREF14, NRC BIBREF10, and subj_lexicon BIBREF15, where each has the two sentiment classes, positive and negative.",
"Morality: Features based on morality foundation theory BIBREF16 where words are labeled in one of the following 10 categories (care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation).",
"Style: We use canonical stylistic features, such as the count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length.",
"Words embeddings: We extract words embeddings of the words of the tweet using $Glove\\-840B-300d$ BIBREF17 pretrained model. The tweet final representation is obtained by averaging its words embeddings.",
"Model. To account for chunk sequences we make use of a de facto standard approach and opt for a recurrent neural model using long short-term memory (LSTM) BIBREF18. In our model, the sequence consists of a sequence of tweets belonging to one chunk (Figure FIGREF11). The LSTM learns the hidden state $h\\textsubscript {t}$ by capturing the previous timesteps (past tweets). The produced hidden state $h\\textsubscript {t}$ at each time step is passed to the attention layer which computes a `context' vector $c\\textsubscript {t}$ as the weighted mean of the state sequence $h$ by:",
"Where $T$ is the total number of timesteps in the input sequence and $\\alpha \\textsubscript {tj}$ is a weight computed at each time step $j$ for each state hj."
],
[
"Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset.",
"",
"Baselines. We compare our approach (FacTweet) to the following set of baselines:",
"[leftmargin=4mm]",
"LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.",
"Tweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.",
"LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.",
"LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.",
"FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.",
"Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline.",
"",
"Experimental Setup. We apply a 5 cross-validation on the account's level. For the FacTweet model, we experiment with 25% of the accounts for validation and parameters selection. We use hyperopt library to select the hyper-parameters on the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$) with varying the value of the learning rate (1e-1,..,-5), and batch size (4, 8, 16). The validation split is extracted on the class level using stratified sampling: we took a random 25% of the accounts from each class since the dataset is unbalanced. Discarding the classes' size in the splitting process may affect the minority classes (e.g. hoax). For the baselines' classifier, we tested many classifiers and the LR showed the best overall performance.",
"",
"Results. Table TABREF25 presents the results. We present the results using a chunk size of 20, which was found to be the best size on the held-out data. Figure FIGREF24 shows the results of different chunks sizes. FacTweet performs better than the proposed baselines and obtains the highest macro-F1 value of $0.565$. Our results indicate the importance of taking into account the sequence of the tweets in the accounts' timelines. The sequence of these tweets is better captured by our proposed model sequence-agnostic or non-neural classifiers. Moreover, the results demonstrate that the features at tweet-level do not perform well to detect the Twitter accounts factuality, since they obtain a result near to the majority class ($0.18$). Another finding from our experiments shows that the performance of the Tweet2vec is weak. This demonstrates that tweets' hashtags are not informative to detect non-factual accounts. In Table TABREF25, we present ablation tests so as to quantify the contribution of subset of features. The results indicate that most performance gains come from words embeddings, style, and morality features. Other features (emotion and sentiment) show lower importance: nevertheless, they still improve the overall system performance (on average 0.35% Macro-F$_1$ improvement). These performance figures suggest that non-factual accounts use semantic and stylistic hidden signatures mostly while tweeting news, so as to be able to mislead the readers and behave as reputable (i.e., factual) sources. We leave a more fine-grained, diachronic analysis of semantic and stylistic features – how semantic and stylistic signature evolve across time and change across the accounts' timelines – for future work."
],
[
"In this paper, we proposed a model that utilizes chunked timelines of tweets and a recurrent neural model in order to infer the factuality of a Twitter news account. Our experimental results indicate the importance of analyzing tweet stream into chunks, as well as the benefits of heterogeneous knowledge source (i.e., lexica as well as text) in order to capture factuality. In future work, we would like to extend this line of research with further in-depth analysis to understand the flow change of the used features in the accounts' streams. Moreover, we would like to take our approach one step further incorporating explicit temporal information, e.g., using timestamps. Crucially, we are also interested in developing a multilingual version of our approach, for instance by leveraging the now ubiquitous cross-lingual embeddings BIBREF22, BIBREF23."
]
],
"section_name": [
"Introduction",
"Methodology",
"Experiments and Results",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"9c35730a55551a45ede326acdac1da5e20529510",
"e44654124e1e47dca19bf29c8bbcad4e5ea2b40b"
],
"answer": [
{
"evidence": [
"Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset."
],
"extractive_spans": [
"For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1",
"we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy"
],
"free_form_answer": "",
"highlighted_evidence": [
"We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1.",
"On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset."
],
"extractive_spans": [
"public resources where suspicious Twitter accounts were annotated",
"list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy"
],
"free_form_answer": "",
"highlighted_evidence": [
"We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"354142e7e6076bf3fc14e3da127f61ab12fc1b7c",
"b6df7caf1df11da06f2a9339595a572829796e81"
],
"answer": [
{
"evidence": [
"Experimental Setup. We apply a 5 cross-validation on the account's level. For the FacTweet model, we experiment with 25% of the accounts for validation and parameters selection. We use hyperopt library to select the hyper-parameters on the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$) with varying the value of the learning rate (1e-1,..,-5), and batch size (4, 8, 16). The validation split is extracted on the class level using stratified sampling: we took a random 25% of the accounts from each class since the dataset is unbalanced. Discarding the classes' size in the splitting process may affect the minority classes (e.g. hoax). For the baselines' classifier, we tested many classifiers and the LR showed the best overall performance."
],
"extractive_spans": [
"relu",
"selu",
"tanh"
],
"free_form_answer": "",
"highlighted_evidence": [
" We use hyperopt library to select the hyper-parameters on the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$) with varying the value of the learning rate (1e-1,..,-5), and batch size (4, 8, 16)"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Experimental Setup. We apply a 5 cross-validation on the account's level. For the FacTweet model, we experiment with 25% of the accounts for validation and parameters selection. We use hyperopt library to select the hyper-parameters on the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$) with varying the value of the learning rate (1e-1,..,-5), and batch size (4, 8, 16). The validation split is extracted on the class level using stratified sampling: we took a random 25% of the accounts from each class since the dataset is unbalanced. Discarding the classes' size in the splitting process may affect the minority classes (e.g. hoax). For the baselines' classifier, we tested many classifiers and the LR showed the best overall performance."
],
"extractive_spans": [],
"free_form_answer": "Activation function is hyperparameter. Possible values: relu, selu, tanh.",
"highlighted_evidence": [
"We use hyperopt library to select the hyper-parameters on the following values: LSTM layer size (16, 32, 64), dropout ($0.0-0.9$), activation function ($relu$, $selu$, $tanh$), optimizer ($sgd$, $adam$, $rmsprop$) with varying the value of the learning rate (1e-1,..,-5), and batch size (4, 8, 16)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"75f2d6fb63a820d4bde6e03ae11c224285dd1d5a",
"8def7f72faf4d97ba0c1a71418ec09ce81bd48c3"
],
"answer": [
{
"evidence": [
"Baselines. We compare our approach (FacTweet) to the following set of baselines:",
"[leftmargin=4mm]",
"LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.",
"Tweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.",
"LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.",
"LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.",
"FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.",
"Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline."
],
"extractive_spans": [
"LR + Bag-of-words",
"Tweet2vec",
"LR + All Features (tweet-level)",
"LR + All Features (chunk-level)",
"FacTweet (tweet-level)",
"Top-$k$ replies, likes, or re-tweets"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baselines. We compare our approach (FacTweet) to the following set of baselines:\n\n[leftmargin=4mm]\n\nLR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.\n\nTweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.\n\nLR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.\n\nLR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.\n\nFacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.\n\nTop-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Baselines. We compare our approach (FacTweet) to the following set of baselines:",
"LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.",
"Tweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.",
"LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.",
"LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.",
"FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.",
"Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline."
],
"extractive_spans": [
"Top-$k$ replies, likes, or re-tweets",
"FacTweet (tweet-level)",
"LR + All Features (chunk-level)",
"LR + All Features (tweet-level)",
"Tweet2vec",
"LR + Bag-of-words"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baselines. We compare our approach (FacTweet) to the following set of baselines:",
"LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.\n\nTweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.\n\nLR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.\n\nLR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.\n\nFacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.\n\nTop-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1639d84b9ba1f6eff4d0e738504610496a644ce7",
"4e7a567c672697689e8ea69c146b8c0edadde7dc"
],
"answer": [
{
"evidence": [
"Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks' tweets. We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account's chunks.",
"The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts."
],
"extractive_spans": [],
"free_form_answer": "Chunks is group of tweets from single account that is consecutive in time - idea is that this group can show secret intention of malicious accounts.",
"highlighted_evidence": [
"Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account.",
"Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Input Representation. Let $t$ be a Twitter account that contains $m$ tweets. These tweets are sorted by date and split into a sequence of chunks $ck = \\langle ck_1, \\ldots , ck_n \\rangle $, where each $ck_i$ contains $s$ tweets. Each tweet in $ck_i$ is represented by a vector $v \\in {\\rm I\\!R}^d$ , where $v$ is the concatenation of a set of features' vectors, that is $v = \\langle f_1, \\ldots , f_n \\rangle $. Each feature vector $f_i$ is built by counting the presence of tweet's words in a set of lexical lists. The final representation of the tweet is built by averaging the single word vectors."
],
"extractive_spans": [],
"free_form_answer": "sequence of $s$ tweets",
"highlighted_evidence": [
"These tweets are sorted by date and split into a sequence of chunks $ck = \\langle ck_1, \\ldots , ck_n \\rangle $, where each $ck_i$ contains $s$ tweets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"1aeae5cb25a43e9854ff588d0cfae837c21b358f",
"57415a2314d05798ae4e21b2ba697e04804d144e"
],
"answer": [
{
"evidence": [
"Features. We argue that different kinds of features like the sentiment of the text, morality, and other text-based features are critical to detect the nonfactual Twitter accounts by utilizing their occurrence during reporting the news in an account's timeline. We employ a rich set of features borrowed from previous works in fake news, bias, and rumors detection BIBREF0, BIBREF1, BIBREF8, BIBREF9.",
"[leftmargin=4mm]",
"Emotion: We build an emotions vector using word occurrences of 15 emotion types from two available emotional lexicons. We use the NRC lexicon BIBREF10, which contains $\\sim $14K words labeled using the eight Plutchik's emotions BIBREF11. The other lexicon is SentiSense BIBREF12 which is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. It has $\\sim $5.5 words labeled with emotions from a set of 14 emotional categories We use the categories that do not exist in the NRC lexicon.",
"Sentiment: We extract the sentiment of the tweets by employing EffectWordNet BIBREF13, SenticNet BIBREF14, NRC BIBREF10, and subj_lexicon BIBREF15, where each has the two sentiment classes, positive and negative.",
"Morality: Features based on morality foundation theory BIBREF16 where words are labeled in one of the following 10 categories (care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation).",
"Style: We use canonical stylistic features, such as the count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length.",
"Words embeddings: We extract words embeddings of the words of the tweet using $Glove\\-840B-300d$ BIBREF17 pretrained model. The tweet final representation is obtained by averaging its words embeddings."
],
"extractive_spans": [
"Sentiment",
"Morality",
"Style",
"Words embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
"Features. We argue that different kinds of features like the sentiment of the text, morality, and other text-based features are critical to detect the nonfactual Twitter accounts by utilizing their occurrence during reporting the news in an account's timeline. We employ a rich set of features borrowed from previous works in fake news, bias, and rumors detection BIBREF0, BIBREF1, BIBREF8, BIBREF9.\n\n[leftmargin=4mm]\n\nEmotion: We build an emotions vector using word occurrences of 15 emotion types from two available emotional lexicons. We use the NRC lexicon BIBREF10, which contains $\\sim $14K words labeled using the eight Plutchik's emotions BIBREF11. The other lexicon is SentiSense BIBREF12 which is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. It has $\\sim $5.5 words labeled with emotions from a set of 14 emotional categories We use the categories that do not exist in the NRC lexicon.\n\nSentiment: We extract the sentiment of the tweets by employing EffectWordNet BIBREF13, SenticNet BIBREF14, NRC BIBREF10, and subj_lexicon BIBREF15, where each has the two sentiment classes, positive and negative.\n\nMorality: Features based on morality foundation theory BIBREF16 where words are labeled in one of the following 10 categories (care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation).\n\nStyle: We use canonical stylistic features, such as the count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length.\n\nWords embeddings: We extract words embeddings of the words of the tweet using $Glove\\-840B-300d$ BIBREF17 pretrained model. The tweet final representation is obtained by averaging its words embeddings."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Emotion: We build an emotions vector using word occurrences of 15 emotion types from two available emotional lexicons. We use the NRC lexicon BIBREF10, which contains $\\sim $14K words labeled using the eight Plutchik's emotions BIBREF11. The other lexicon is SentiSense BIBREF12 which is a concept-based affective lexicon that attaches emotional meanings to concepts from the WordNet lexical database. It has $\\sim $5.5 words labeled with emotions from a set of 14 emotional categories We use the categories that do not exist in the NRC lexicon.",
"Sentiment: We extract the sentiment of the tweets by employing EffectWordNet BIBREF13, SenticNet BIBREF14, NRC BIBREF10, and subj_lexicon BIBREF15, where each has the two sentiment classes, positive and negative.",
"Morality: Features based on morality foundation theory BIBREF16 where words are labeled in one of the following 10 categories (care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation).",
"Style: We use canonical stylistic features, such as the count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length.",
"Words embeddings: We extract words embeddings of the words of the tweet using $Glove\\-840B-300d$ BIBREF17 pretrained model. The tweet final representation is obtained by averaging its words embeddings."
],
"extractive_spans": [
"15 emotion types",
"sentiment classes, positive and negative",
"care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation",
"count of question marks",
"exclamation marks",
"consecutive characters and letters",
"links",
"hashtags",
"users' mentions",
"uppercase ratio",
"tweet length",
"words embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
"Emotion: We build an emotions vector using word occurrences of 15 emotion types from two available emotional lexicons. ",
"Sentiment: We extract the sentiment of the tweets by employing EffectWordNet BIBREF13, SenticNet BIBREF14, NRC BIBREF10, and subj_lexicon BIBREF15, where each has the two sentiment classes, positive and negative.",
"Morality: Features based on morality foundation theory BIBREF16 where words are labeled in one of the following 10 categories (care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation).",
"Style: We use canonical stylistic features, such as the count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions. In addition, we extract the uppercase ratio and the tweet length.",
"Words embeddings: We extract words embeddings of the words of the tweet using $Glove\\-840B-300d$ BIBREF17 pretrained model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"61b366f5d6c98eb3cb698454b489947248cd54d4"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6b00b3505df58b07d9e07fc3c4aaa67ed5692e57"
],
"answer": [
{
"evidence": [
"Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks' tweets. We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account's chunks."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2e668e6f7e3841001b9750e54970f27b0e9d0d8e",
"6ea880250039b0464ba7b72a9a0c54b9b132caa5"
],
"answer": [
{
"evidence": [
"Results. Table TABREF25 presents the results. We present the results using a chunk size of 20, which was found to be the best size on the held-out data. Figure FIGREF24 shows the results of different chunks sizes. FacTweet performs better than the proposed baselines and obtains the highest macro-F1 value of $0.565$. Our results indicate the importance of taking into account the sequence of the tweets in the accounts' timelines. The sequence of these tweets is better captured by our proposed model sequence-agnostic or non-neural classifiers. Moreover, the results demonstrate that the features at tweet-level do not perform well to detect the Twitter accounts factuality, since they obtain a result near to the majority class ($0.18$). Another finding from our experiments shows that the performance of the Tweet2vec is weak. This demonstrates that tweets' hashtags are not informative to detect non-factual accounts. In Table TABREF25, we present ablation tests so as to quantify the contribution of subset of features. The results indicate that most performance gains come from words embeddings, style, and morality features. Other features (emotion and sentiment) show lower importance: nevertheless, they still improve the overall system performance (on average 0.35% Macro-F$_1$ improvement). These performance figures suggest that non-factual accounts use semantic and stylistic hidden signatures mostly while tweeting news, so as to be able to mislead the readers and behave as reputable (i.e., factual) sources. We leave a more fine-grained, diachronic analysis of semantic and stylistic features – how semantic and stylistic signature evolve across time and change across the accounts' timelines – for future work."
],
"extractive_spans": [
"words embeddings, style, and morality features"
],
"free_form_answer": "",
"highlighted_evidence": [
"The results indicate that most performance gains come from words embeddings, style, and morality features. Other features (emotion and sentiment) show lower importance: nevertheless, they still improve the overall system performance (on average 0.35% Macro-F$_1$ improvement)"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Results. Table TABREF25 presents the results. We present the results using a chunk size of 20, which was found to be the best size on the held-out data. Figure FIGREF24 shows the results of different chunks sizes. FacTweet performs better than the proposed baselines and obtains the highest macro-F1 value of $0.565$. Our results indicate the importance of taking into account the sequence of the tweets in the accounts' timelines. The sequence of these tweets is better captured by our proposed model sequence-agnostic or non-neural classifiers. Moreover, the results demonstrate that the features at tweet-level do not perform well to detect the Twitter accounts factuality, since they obtain a result near to the majority class ($0.18$). Another finding from our experiments shows that the performance of the Tweet2vec is weak. This demonstrates that tweets' hashtags are not informative to detect non-factual accounts. In Table TABREF25, we present ablation tests so as to quantify the contribution of subset of features. The results indicate that most performance gains come from words embeddings, style, and morality features. Other features (emotion and sentiment) show lower importance: nevertheless, they still improve the overall system performance (on average 0.35% Macro-F$_1$ improvement). These performance figures suggest that non-factual accounts use semantic and stylistic hidden signatures mostly while tweeting news, so as to be able to mislead the readers and behave as reputable (i.e., factual) sources. We leave a more fine-grained, diachronic analysis of semantic and stylistic features – how semantic and stylistic signature evolve across time and change across the accounts' timelines – for future work."
],
"extractive_spans": [
"words embeddings, style, and morality features"
],
"free_form_answer": "",
"highlighted_evidence": [
"The results indicate that most performance gains come from words embeddings, style, and morality features."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"1f0425c0149ff9aae26501ac7384903193e15451",
"530bd6be6f77d4579d2a853a1681569e33faadf8"
],
"answer": [
{
"evidence": [
"Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset.",
"FLOAT SELECTED: Table 1: Statistics on the data with respect to each account type: propaganda (P), clickbait (C), hoax (H), and real news (R)."
],
"extractive_spans": [],
"free_form_answer": "Total dataset size: 171 account (522967 tweets)",
"highlighted_evidence": [
"Table TABREF13 presents statistics on our dataset.",
"FLOAT SELECTED: Table 1: Statistics on the data with respect to each account type: propaganda (P), clickbait (C), hoax (H), and real news (R)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset."
],
"extractive_spans": [],
"free_form_answer": "212 accounts",
"highlighted_evidence": [
" For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1.",
"On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"84c478d92d97a8d0f982802371eeedbe4786c979",
"b98c99aa99b1aebdddd040e8cef5e6e23a7786c0"
],
"answer": [
{
"evidence": [
"The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts.",
"Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks' tweets. We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account's chunks."
],
"extractive_spans": [
"chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account"
],
"free_form_answer": "",
"highlighted_evidence": [
"Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions.",
"Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Input Representation. Let $t$ be a Twitter account that contains $m$ tweets. These tweets are sorted by date and split into a sequence of chunks $ck = \\langle ck_1, \\ldots , ck_n \\rangle $, where each $ck_i$ contains $s$ tweets. Each tweet in $ck_i$ is represented by a vector $v \\in {\\rm I\\!R}^d$ , where $v$ is the concatenation of a set of features' vectors, that is $v = \\langle f_1, \\ldots , f_n \\rangle $. Each feature vector $f_i$ is built by counting the presence of tweet's words in a set of lexical lists. The final representation of the tweet is built by averaging the single word vectors."
],
"extractive_spans": [],
"free_form_answer": "sequence of $s$ tweets",
"highlighted_evidence": [
"These tweets are sorted by date and split into a sequence of chunks $ck = \\langle ck_1, \\ldots , ck_n \\rangle $, where each $ck_i$ contains $s$ tweets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"19a667e79c85ac90d6e245a38b916e97646823b0",
"9f3c9731db806e072dadf41ca2af66416a73bb2f"
],
"answer": [
{
"evidence": [
"Baselines. We compare our approach (FacTweet) to the following set of baselines:",
"LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.",
"Tweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.",
"LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.",
"LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.",
"FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.",
"Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline."
],
"extractive_spans": [
"LR + Bag-of-words",
"Tweet2vec",
"LR + All Features (tweet-level)",
"LR + All Features (chunk-level)",
"FacTweet (tweet-level)",
"Top-$k$ replies, likes, or re-tweets"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baselines. We compare our approach (FacTweet) to the following set of baselines:",
"LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.\n\nTweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.\n\nLR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.\n\nLR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.\n\nFacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.\n\nTop-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Baselines. We compare our approach (FacTweet) to the following set of baselines:",
"[leftmargin=4mm]",
"LR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.",
"Tweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20. We keep the default parameters that were provided with the implementation. To represent the tweets, we use the decoded embedding produced by the model. With this baseline we aim at assessing if the tweets' hashtags may help detecting the non-factual accounts.",
"LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. Here, we do not aggregate over tweets and thus view each tweet independently.",
"LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.",
"FacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. We aim at investigating the importance of the sequential flow of tweets.",
"Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21. Thus, we extract top $k$ replied, liked or re-tweeted tweets from each account to assess the accounts factuality. We tested different $k$ values between 10 tweets to the max number of tweets from each account. Figure FIGREF24 shows the macro-F1 values for different $k$ values. It seems that $k=500$ for the top replied tweets achieves the highest result. Therefore, we consider this as a baseline."
],
"extractive_spans": [
"LR + Bag-of-words",
"Tweet2vec",
"LR + All Features (tweet-level)",
"LR + All Features (chunk-level)",
"FacTweet (tweet-level)",
"Top-$k$ replies, likes, or re-tweets"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baselines. We compare our approach (FacTweet) to the following set of baselines:\n\n[leftmargin=4mm]\n\nLR + Bag-of-words: We aggregate the tweets of a feed and we use a bag-of-words representation with a logistic regression (LR) classifier.\n\nTweet2vec: We use the Bidirectional Gated recurrent neural network model proposed in BIBREF20.",
"LR + All Features (tweet-level): We extract all our features from each tweet and feed them into a LR classifier. ",
"LR + All Features (chunk-level): We concatenate the features' vectors of the tweets in a chunk and feed them into a LR classifier.\n\nFacTweet (tweet-level): Similar to the FacTweet approach, but at tweet-level; the sequential flow of the tweets is not utilized. ",
"Top-$k$ replies, likes, or re-tweets: Some approaches in rumors detection use the number of replies, likes, and re-tweets to detect rumors BIBREF21."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two",
"two",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How did they obtain the dataset?",
"What activation function do they use in their model?",
"What baselines do they compare to?",
"How are chunks defined?",
"What features are extracted?",
"How many layers does their model have?",
"Was the approach used in this work to detect fake news fully supervised?",
"Based on this paper, what is the more predictive set of features to detect fake news?",
"How big is the dataset used in this work?",
"How is a \"chunk of posts\" defined in this work?",
"What baselines were used in this work?"
],
"question_id": [
"e35c2fa99d5c84d8cb5d83fca2b434dcd83f3851",
"c00ce1e3be14610fb4e1f0614005911bb5ff0302",
"71fe5822d9fccb1cb391c11283b223dc8aa1640c",
"97d0f9a1540a48e0b4d30d7084a8c524dd09a4c3",
"1062a0506c3691a93bb914171c2701d2ae9621cb",
"8e12b5c459fa963b3e549deadb864c244879fe82",
"483a699563efcb8804e1861b18809279f21c7610",
"d3ff2986ca8cb85a9a5cec039c266df756947b43",
"3e1829e96c968cbd8ad8e9ce850e3a92a76b26e4",
"2317ca8d475b01f6632537b95895608dc40c4415",
"3e88fb3d28593309a307eb97e875575644a01463"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1: The FacTweet’s architecture.",
"Table 1: Statistics on the data with respect to each account type: propaganda (P), clickbait (C), hoax (H), and real news (R).",
"Fig. 3: The FacTweet performance on difference chunk sizes.",
"Table 2: FacTweet: experimental results."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Figure3-1.png",
"6-Table2-1.png"
]
} | [
"What activation function do they use in their model?",
"How are chunks defined?",
"How big is the dataset used in this work?",
"How is a \"chunk of posts\" defined in this work?"
] | [
[
"1910.06592-Experiments and Results-11"
],
[
"1910.06592-Methodology-0",
"1910.06592-Methodology-2",
"1910.06592-Introduction-1"
],
[
"1910.06592-4-Table1-1.png",
"1910.06592-Experiments and Results-0"
],
[
"1910.06592-Methodology-0",
"1910.06592-Methodology-2",
"1910.06592-Introduction-1"
]
] | [
"Activation function is hyperparameter. Possible values: relu, selu, tanh.",
"sequence of $s$ tweets",
"212 accounts",
"sequence of $s$ tweets"
] | 124 |
1706.01678 | Text Summarization using Abstract Meaning Representation | With the ever-increasing amount of text on the Internet, automatic summary generation remains an important problem for natural language understanding. In this work we explore a novel full-fledged pipeline for text summarization with an intermediate step of Abstract Meaning Representation (AMR). The proposed pipeline first generates an AMR graph of an input story, extracts a summary graph from it, and finally generates summary sentences from this summary graph. Our proposed method achieves state-of-the-art results compared to other AMR-based text summarization methods. We also point out some significant problems in the existing evaluation methods that make them unsuitable for evaluating summary quality. | {
"paragraphs": [
[
"Summarization of large texts is still an open problem in language processing. People nowadays have lesser time and patience to go through large pieces of text which make automatic summarization important. Automatic summarization has significant applications in summarizing large texts like stories, journal papers, news articles and even larger texts like books.",
"Existing methods for summarization can be broadly categorized into two categories Extractive and Abstractive. Extractive methods picks up words and sometimes directly sentences from the text. These methods are inherently limited in the sense that they can never generate human level summaries for large and complicated documents which require rephrasing sentences and incorporating information from full text to generate summaries. Most of the work done on summarization in past has been extractive.",
"On the other hand most Abstractive methods take advantages of the recent developments in deep learning. Specifically the recent success of the sequence to sequence learning models where recurrent networks read the text, encodes it and then generate target text. Though these methods have recently shown to be competitive with the extractive methods they are still far away from reaching human level quality in summary generation.",
"The work on summarization using AMR was started by BIBREF0 . Abstract Meaning Representation (AMR) was as introduced by BIBREF1 . AMR focuses on capturing the meaning of the text, by giving a specific meaning representation to the text. AMR tries to capture the \"who is doing what to whom\" in a sentence. The formalism aims to give same representation to sentences which have the same underlying meaning. For example \"He likes apple\" and \"Apples are liked by him\" should be assigned the same AMR.",
" BIBREF0 's approach aimed to produce a summary for a story by extracting a summary subgraph from the story graph and finally generate a summary from this extracted graph. But, because of the unavailability of AMR to text generator at that time their work was limited till extracting the summary graph. This method extracts a single summary graph from the story graph. Extracting a single summary graph assumes that all of the important information from the graph can be extracted from a single subgraph. But, it can be difficult in cases where the information is spread out in the graph. Thus, the method compromises between size of the summary sub-graph and the amount of information it can extract. This can be easily solved if instead of a single sub-graph, we extract multiple subgraphs each focusing on information in a different part of the story.",
"We propose a two step process for extracting multiple summary graphs. First step is to select few sentences from the story. We use the idea that there are only few sentences that are important from the point of view of summary, i.e. most of the information contained in the summary is present in very few sentences and they can be used to generate the summary. Second step is to extract important information from the selected sentences by extracting a sub-graph from the selected sentences.",
"Our main contributions in this work are three folds,",
"The rest of the paper is organized as follows. Section SECREF2 contains introduction to AMR, section SECREF3 and SECREF4 contains the datasets and the algorithm used for summary generation respectively. Section SECREF5 has a detailed step-by-step evaluation of the pipeline and in section SECREF6 we discuss the problems with the current dataset and evaluation metric."
],
[
"AMR was introduced by BIBREF1 with the aim to induce work on statistical Natural Language Understanding and Generation. AMR represents meaning using graphs. AMR graphs are rooted, directed, edge and vertex labeled graphs. Figure FIGREF4 shows the graphical representation of the AMR graph of the sentence \"I looked carefully all around me\" generated by JAMR parser ( BIBREF2 ). The graphical representation was produced using AMRICA BIBREF3 . The nodes in the AMR are labeled with concepts as in Figure FIGREF4 around represents a concept. Edges contains the information regarding the relations between the concepts. In Figure FIGREF4 direction is the relation between the concepts look-01 and around. AMR relies on Propbank for semantic relations (edge labels). Concepts can also be of the form run-01 where the index 01 represents the first sense of the word run. Further details about the AMR can be found in the AMR guidelines BIBREF4 .",
"A lot of work has been done on parsing sentences to their AMR graphs. There are three main approaches to parsing. There is alignment based parsing BIBREF2 (JAMR-Parser), BIBREF5 which uses graph based algorithms for concept and relation identification. Second, grammar based parsers like BIBREF6 (CAMR) generate output by performing shift reduce transformations on output of a dependency parser. Neural parsing BIBREF7 , BIBREF8 is based on using seq2seq models for parsing, the main problem for neural methods is the absence of a huge corpus of human generated AMRs. BIBREF8 reduced the vocabulary size to tackle this problem while BIBREF7 used larger external corpus of external sentences.",
"Recently, some work has been done on producing meaningful sentences form AMR graphs. BIBREF2 used a number of tree to string conversion rules for generating sentences. BIBREF9 reformed the problem as a traveling salesman problem. BIBREF7 used seq2seq learning methods."
],
[
"We used two datasets for the task - AMR Bank BIBREF10 and CNN-Dailymail ( BIBREF11 BIBREF12 ). We use the proxy report section of the AMR Bank, as it is the only one that is relevant for the task because it contains the gold-standard (human generated) AMR graphs for news articles, and the summaries. In the training set the stories and summaries contain 17.5 sentences and 1.5 sentences on an average respectively. The training and test sets contain 298 and 33 summary document pairs respectively.",
"CNN-Dailymail corpus is better suited for summarization as the average summary size is around 3 or 4 sentences. This dataset has around 300k document summary pairs with stories having 39 sentences on average. The dataset comes in 2 versions, one is the anonymized version, which has been preprocessed to replace named entities, e.g., The Times of India, with a unique identifier for example @entity1. Second is the non-anonymized which has the original text. We use the non-anonymized version of the dataset as it is more suitable for AMR parsing as most of the parsers have been trained on non-anonymized text. The dataset does not have gold-standard AMR graphs. We use automatic parsers to get the AMR graphs but they are not gold-standard and will effect the quality of final summary. To get an idea of the error introduced by using automatic parsers, we compare the results after using gold-standard and automatically generated AMR graphs on the gold-standard dataset."
],
[
"The pipeline consists of three steps, first convert all the given story sentences to there AMR graphs followed by extracting summary graphs from the story sentence graphs and finally generating sentences from these extracted summary graphs. In the following subsections we explain each of the methods in greater detail."
],
[
"As the first step we convert the story sentences to their Abstract Meaning Representations. We use JAMR-Parser version 2 BIBREF2 as it’s openly available and has a performance close to the state of the art parsers for parsing the CNN-Dailymail corpus. For the AMR-bank we have the gold-standard AMR parses but we still parse the input stories with JAMR-Parser to study the effect of using graphs produced by JAMR-Parser instead of the gold-standard AMR graphs."
],
[
"After parsing (Step 1) we have the AMR graphs for the story sentences. In this step we extract the AMR graphs of the summary sentences using story sentence AMRs. We divide this task in two parts. First is finding the important sentences from the story and then extracting the key information from those sentences using their AMR graphs.",
"Our algorithm is based on the idea that only few sentences are important from the point of view of summary i.e. there are only a few sentences which contain most of the important information and from these sentences we can generate the summary.",
"Hypothesis: Most of the information corresponding to a summary sentence can be found in only one sentence from the story.",
"To test this hypothesis, for each summary sentence we find the sentence from the story that contains maximum information of this summary sentence. We use ROGUE-1 BIBREF13 Recall scores (measures the ratio of number of words in the target summary that are contained in the predicted summary to the total number of words in the target summary) as the metric for the information contained in the story sentence. We consider the story sentence as the predicted summary and the summary sentence as the target summary. The results that we obtained for 5000 randomly chosen document summary pairs from the CNN-Dailymail corpus are given in figure FIGREF8 . The average recall score that we obtained is 79%. The score will be perfectly 1 when the summary sentence is directly picked up from a story sentence. Upon manual inspection of the summary sentence and the corresponding best sentence from the story we realized, when this score is more than 0.5 or 0.6, almost always the information in the summary sentence is contained in this chosen story sentence. The score for in these cases is not perfectly 1 because of stop words and different verb forms used in story and summary sentence. Around 80% of summary sentences have score above 0.5. So, our hypothesis seems to be correct for most of the summary sentences. This also suggests the highly extractive nature of the summary in the corpus.",
"Now the task in hand is to select few important sentences. Methods that use sentence extraction for summary generation can be used for the task. It is very common in summarization tasks specifically in news articles that a lot of information is contained in the initial few sentences. Choosing initial few sentences as the summary produces very strong baselines which the state-of-the-art methods beat only marginally. Ex. On the CNN-Dailymail corpus the state-of-the-art extractive method beats initial 3 sentences only by 0.4% as reported by BIBREF14 .",
"",
"Using this idea of picking important sentences from the beginning, we propose two methods, first is to simply pick initial few sentences, we call this first-n method where n stands for the number of sentences. We pick initial 3 sentences for the CNN-Dailymail corpus i.e. first-3 and only the first sentence for the proxy report section (AMR Bank) i.e. first-1 as they produce the best scores on the ROGUE metric compared to any other first-n. Second, we try to capture the relation between the two most important entities (we define importance by the number of occurrences of the entity in the story) of the document. For this we simply find the first sentence which contains both these entities. We call this the first co-occurrence based sentence selection. We also select the first sentence along with first co-occurrence based sentence selection as the important sentences. We call this the first co-occurrence+first based sentence selection.",
"As the datasets under consideration are news articles. The most important information in them is about an entity and a verb associated with it. So, to extract important information from the sentence. We try to find the entity being talked about in the sentence, we consider the most referred entity (one that occurs most frequently in the text), now for the main verb associated with the entity in the sentence, we find the verb closest to this entity in the AMR graph. We define the closest verb as the one which lies first in the path from the entity to the root.",
"We start by finding the position of the most referred entity in the graph, then we find the closest verb to the entity. and finally select the subtree hanging from that verb as the summary AMR."
],
[
"To generate sentences from the extracted AMR graphs we can use already available generators. We use Neural AMR ( BIBREF7 ) as it provides state of the art results in sentence generation. We also use BIBREF15 (JAMR-Generator) in one of the experiments in the next section. Generators significantly effect the results, we will analyze the effectiveness of generator in the next section."
],
[
"In this section, we present the baseline models and analysis method used for each step of our pipeline.",
"For the CNN-Dailymail dataset, the Lead-3 model is considered a strong baseline; both the abstractive BIBREF16 and extractive BIBREF14 state-of-the art methods on this dataset beat this baseline only marginally. The Lead-3 model simply produces the leading three sentences of the document as its summary.",
"The key step in our pipeline is step-2 i.e. summary graph extraction.",
"Directly comparing the Lead-3 baseline, with AMR based pipeline to evaluate the effectiveness of step-2 is an unfair comparison because of the errors introduced by imperfect parser and generator in the AMR pipeline. Thus to evaluate the effectiveness of step-2 against Lead-3 baseline, we need to nullify the effect of errors introduce by AMR parser and generator. We achieve this by trying to introduce similar errors in the leading thre sentences of each document. We generate the AMR graphs of the leading three sentences and then generate the sentences using these AMR graph. We use parser and generator that were used in our pipeline. We consider these generated sentences as the new baseline summary, we shall now refer to it Lead-3-AMR baseline in the remaining of the paper.",
"For the proxy report section of the AMR bank, we consider the Lead-1-AMR model as the baseline. For this dataset we already have the gold-standard AMR graphs of the sentences. Therefore, we only need to nullify the error introduced by the generator."
],
[
"For the evaluation of summaries we use the standard ROGUE metric. For comparison with previous AMR based summarization methods, we report the Recall, Precision and INLINEFORM0 scores for ROGUE-1. Since most of the literature on summarization uses INLINEFORM1 scores for ROGUE-2 and ROGUE-L for comparison, we also report INLINEFORM2 scores for ROGUE-2 and ROGUE-L for our method. ROGUE-1 Recall and Precision are measured for uni-gram overlap between the reference and the predicted summary. On the other hand, ROGUE-2 uses bi-gram overlap while ROGUE-L uses the longest common sequence between the target and the predicted summaries for evaluation. In rest of this section, we provide methods to analyze and evaluate our pipeline at each step.",
"Step-1: AMR parsing To understand the effects of using an AMR parser on the results, we compare the final scores after the following two cases- first, when we use the gold-standard AMR graphs and second when we used the AMR graphs generated by JAMR-Parser in the pipeline. Section SECREF19 contains a comparison between the two.",
"Step-2: Summary graph extraction For evaluating the effectiveness of the summary graph extraction step we compare the final scores with the Lead-n-AMR baselines described in section SECREF12 .",
"In order to compare our summary graph extraction step with the previous work ( BIBREF0 ), we generate the final summary using the same generation method as used by them. Their method uses a simple module based on alignments for generating summary after step-2. The alignments simply map the words in the original sentence with the node or edge in the AMR graph. To generate the summary we find the words aligned with the sentence in the selected graph and output them in no particular order as the predicted summary. Though this does not generate grammatically correct sentences, we can still use the ROGUE-1 metric similar to BIBREF0 , as it is based on comparing uni-grams between the target and predicted summaries.",
"Step-3: Generation For evaluating the quality of the sentences generated by our method, we compare the summaries generated by the first-1 model and Lead-1-AMR model on the gold-standard dataset. However, when we looked at the scores given by ROGUE, we decided to do get the above summaries evaluated by humans. This produced interesting results which are given in more detail in section SECREF20 ."
],
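As a reference point for the definitions above, a toy sketch of ROGUE-1 computed as unigram overlap; the official ROUGE toolkit additionally handles stemming options, stop words, multiple references and ROUGE-2/L, so this is only meant to illustrate the Precision/Recall/F1 computation used in the comparisons below:

```python
from collections import Counter

def rouge_1(reference, prediction):
    """Unigram-overlap ROUGE-1: recall over the reference, precision over the prediction."""
    ref = Counter(reference.lower().split())
    pred = Counter(prediction.lower().split())
    overlap = sum((ref & pred).values())            # clipped unigram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(pred.values()), 1)
    f1 = 0.0 if overlap == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(rouge_1("the minister met the delegation",
              "minister met delegation on monday"))   # (0.6, 0.6, 0.6)
```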
[
"In table TABREF13 we report the results of using the pipeline with generation using the alignment based generation module defined in section SECREF12 , on the proxy report section of the AMR Bank. All of our methods out-perform BIBREF0 's method. We obtain best ROGUE-1 INLINEFORM0 scores using the first co-occurrence+first model for important sentences. This also out-perform our Lead-1-AMR baseline by 0.3 ROGUE-1 INLINEFORM1 points."
],
[
"In this subsection we analyze the effect of using JAMR parser for step-1 instead of the gold-standard AMR graphs. First part of table TABREF14 has scores after using the gold-standard AMR graphs. In the second part of table TABREF14 we have included the scores of using the JAMR parser for AMR graph generation. We have used the same Neural AMR for sentence generation in all methods. Scores of all methods including the Lead-1-AMR baseline have dropped significantly.",
"The usage of JAMR Parser has affected the scores of first co-occurrence+first and first-1 more than that for the Lead-1-AMR. The drop in ROGUE INLINEFORM0 score when we use first co-occurrence+first is around two ROGUE INLINEFORM1 points more than when Lead-1-AMR. This is a surprising result, and we believe that it is worthy of further research."
],
[
"In this subsection we evaluate the effectiveness of the sentence generation step. For fair comparison at the generation step we use the gold-standard AMRs and don't perform any extraction in step-2 instead we use full AMRs, this allows to remove any errors that might have been generated in step-1 and step-2. In order to compare the quality of sentences generated by the AMR, we need a gold-standard for sentence generation step. For this, we simply use the original sentence as gold-standard for sentence generation. Thus, we compare the quality of summary generated by Lead-1 and Lead-1-AMR. The scores using the ROGUE metric are given in bottom two rows of table TABREF17 . The results show that there is significant drop in Lead-1-AMR when compared to Lead-1.",
"We perform human evaluation to check whether the drop in ROGUE scores is because of drop in information contained, and human readability or is it because of the inability of the ROGUE metric to judge. To perform this evaluation we randomly select ten test examples from the thirty- three test cases of the proxy report section. For each example, we show the summaries generated by four different models side by side to the human evaluators. The human evaluator does not know which summaries come from which model. A score from 1 to 10 is then assigned to each summary on the basis of readability, and information contained of summary, where 1 corresponds to the lower level and 10 to the highest. In table TABREF17 we compare the scores of these four cases as given by ROGUE along with human evaluation. The parser-generator pairs for the four cases are gold-JAMR(generator), JAMR(parser)-neural, gold-neural, and the original sentence respectively. Here gold parser means that we have used the gold-standard AMR graphs.",
"The scores given by the humans do not correlate with ROGUE. Human evaluators gives almost similarly scores to summary generated by the Lead-1 and Lead-1-AMR with Lead-1-AMR actually performing better on readability though it dropped some information as clear from the scores on information contained. On the other hand, ROGUE gives very high score to Lead-1 while models 1,2 and 4 get almost same scores. The similar scores of model 2 and 3 shows that generators are actually producing meaningful sentences. Thus the drop in ROGUE scores is mainly due to the inability of the ROGUE to evaluate abstractive summaries. Moreover, the ROGUE gives model 4 higher score compared to model 1 while human evaluators give the opposite scores on information contained in the sentence.",
"A possible reason for the inability of the ROGUE metric to properly evaluate the summaries generated by our method might be due to its inability to evaluate restructured sentences. AMR formalism tries to assign the same AMR graphs to the sentences that have same meaning so there exists a one-to-many mapping from AMR graphs to sentences. This means that the automatic generators that we are using might not be trying to generate the original sentence; instead it is trying to generate some other sentence that has the same underlying meaning. This also helps in explaining the low ROGUE-2 and ROGUE-L scores. If the sentences might be getting rephrased, they would loose most of the bi- and tri-grams from the original sentence resulting in low ROGUE-2 and ROGUE-L scores."
],
[
"The aim of extracting summary graphs from the AMR graphs of the sentence is to drop the not so important information from the sentences. If we are able to achieve this perfectly, the ROGUE-1 Recall scores that we are getting should remain almost the same (since we are not add any new information) and the ROGUE-1 precision should go up (as we have thrown out some useless information); thus effectively improving the overall ROGUE-1 INLINEFORM0 score. In the first two rows of table TABREF14 we have the scores after using the full-AMR and extracted AMR for generation respectively. It is safe to say that extracting the AMR results in improved ROGUE-1 precision whereas ROGUE-1 Recall reduces only slightly, resulting in an overall improved ROGUE-1 INLINEFORM1 ."
],
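A toy numeric illustration of this precision/recall argument (the counts below are invented for illustration and are not taken from table TABREF14):

```python
def f1(p, r):
    return 2 * p * r / (p + r)

# Before extraction: 20-word output, 8 of 10 reference unigrams matched.
p_full, r_full = 8 / 20, 8 / 10          # P = 0.40, R = 0.80
# After extraction: 12-word output, 7 of 10 reference unigrams matched.
p_ext, r_ext = 7 / 12, 7 / 10            # P ≈ 0.58, R = 0.70
print(f1(p_full, r_full), f1(p_ext, r_ext))   # ≈ 0.53 vs ≈ 0.64
```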
[
"In table TABREF18 we report the results on the CNN-Dailymail corpus. We present scores by using the first-3 model. The first row contains the Lead-3-AMR baseline. The results we achieve are competitive with the Lead-3-AMR baseline. The rest of the table contains scores of Lead-3 baseline followed by the state-of-the-art method on the anonymized and non-anonymized versions of the dataset. The drop in the scores from the Lead-3(non-anonymized) to Lead-3-AMR is significant and is largely because of the error introduced by parser and generator."
],
[
" BIBREF18 showed that most of the work in text summarization has been extractive, where sentences are selected from the text which are then concatenated to form a summary. BIBREF19 transformed the input to nodes, then used the Pagerank algorithm to score nodes, and finally grow the nodes from high-value to low-value using some heuristics. Some of the approaches combine this with sentence compression, so more sentences can be packed in the summary. BIBREF20 , BIBREF21 , BIBREF22 , and BIBREF23 among others used ILPs and approximations for encoding compression and extraction.",
"Recently some abstractive approaches have also been proposed most of which used sequence to sequence learning models for the task. BIBREF24 , BIBREF25 , BIBREF12 , BIBREF17 used standard encoder-decoder models along with their variants to generate summaries. BIBREF26 incorporated the AMR information in the standard encoder-decoder models to improve results. Our work in similar to other graph based abstractive summarization methods BIBREF27 and BIBREF28 . BIBREF27 used dependency parse trees to produce summaries. On the other hand our work takes advantage of semantic graphs."
],
[
"ROGUE metric, by it is design has lots of properties that make it unsuitable for evaluating abstractive summaries. For example, ROGUE matches exact words and not the stems of the words, it also considers stop words for evaluation. One of the reasons why ROGUE like metrics might never become suitable for evaluating abstractive summaries is its incapabilities of knowing if the sentences have been restructured. A good evaluation metric should be one where we compare the meaning of the sentence and not the exact words. As we showed section SECREF20 ROGUE is not suitable for evaluating summaries generated by the AMR pipeline.",
"We now show why the CNN-Dailymail corpus is not suitable for Abstractive summarization. The nature of summary points in the corpus is highly extractive (Section SECREF7 for details) where most of the summary points are simply picked up from some sentences in the story. Tough, this is a good enough reason to start searching for better dataset, it is not the biggest problem with the dataset. The dataset has the property that a lot of important information is in the first few sentences and most of the summary points are directly pick from these sentences. The extractive methods based on sentence selection like SummaRunNer are not actually performing well, the results they have got are only slightly better than the Lead-3 baseline. The work doesn't show how much of the selected sentences are among the first few and it might be the case that the sentences selected by the extractive methods are mostly among the first few sentences, the same can be the problem with the abstractive methods, where most the output might be getting copied from the initial few sentences.",
"These problems with this corpus evoke the need to have another corpus where we don't have so much concentration of important information at any location but rather the information is more spread out and the summaries are more abstractive in nature."
],
[
"As this proposed algorithm is a step by step process we can focus on improving each step to produce better results. The most exciting improvements can be done in the summary graph extraction method. Not a lot of work has been done to extract AMR graphs for summaries. In order to make this pipeline generalizable for any sort of text, we need to get rid of the hypothesis that the summary is being extracted exactly from one sentence. So, the natural direction seems to be joining AMR graphs of multiple sentences that are similar and then extracting the summary AMR from that large graph. It will be like clustering similar sentences and then extracting a summary graph from each of these cluster. Another idea is to use AMR graphs for important sentence selection."
],
[
"In this work we have explored a full-fledged pipeline using AMR for summarization for the first time. We propose a new method for extracting summary graph, which outperformed previous methods. Overall we provide strong baseline for text summarization using AMR for possible future works. We also showed that ROGUE can't be used for evaluating the abstractive summaries generated by our AMR pipeline."
]
],
"section_name": [
"Introduction",
"Background: AMR Parsing and Generation",
"Datasets",
"Pipeline for Summary Generation",
"Step 1: Story to AMR",
"Step 2: Story AMR to Summary AMR",
"Step 3: Summary Generation",
"Baselines",
"Procedure to Analyze and Evaluate each step",
"Results on the Proxy report section",
"Effects of using JAMR Parser",
"Effectiveness of the Generator",
"Analyzing the effectiveness of AMR extraction",
"Results on the CNN-Dailymail corpus",
"Related Work",
"Need of an new Dataset and Evaluation Metric",
"Possible Future Directions",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"a9cea0bb56287f05e72342d1de4ee746efaeb334"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4076812e28527a7420bf14a08a2d8fb480f5463b",
"9e617c3ad1408e20b6e2ebb7b16643792d47a0a4"
],
"answer": [
{
"evidence": [
"For the evaluation of summaries we use the standard ROGUE metric. For comparison with previous AMR based summarization methods, we report the Recall, Precision and INLINEFORM0 scores for ROGUE-1. Since most of the literature on summarization uses INLINEFORM1 scores for ROGUE-2 and ROGUE-L for comparison, we also report INLINEFORM2 scores for ROGUE-2 and ROGUE-L for our method. ROGUE-1 Recall and Precision are measured for uni-gram overlap between the reference and the predicted summary. On the other hand, ROGUE-2 uses bi-gram overlap while ROGUE-L uses the longest common sequence between the target and the predicted summaries for evaluation. In rest of this section, we provide methods to analyze and evaluate our pipeline at each step."
],
"extractive_spans": [],
"free_form_answer": "Quantitative evaluation methods using ROUGE, Recall, Precision and F1.",
"highlighted_evidence": [
"For the evaluation of summaries we use the standard ROGUE metric. For comparison with previous AMR based summarization methods, we report the Recall, Precision and INLINEFORM0 scores for ROGUE-1."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the evaluation of summaries we use the standard ROGUE metric. For comparison with previous AMR based summarization methods, we report the Recall, Precision and INLINEFORM0 scores for ROGUE-1. Since most of the literature on summarization uses INLINEFORM1 scores for ROGUE-2 and ROGUE-L for comparison, we also report INLINEFORM2 scores for ROGUE-2 and ROGUE-L for our method. ROGUE-1 Recall and Precision are measured for uni-gram overlap between the reference and the predicted summary. On the other hand, ROGUE-2 uses bi-gram overlap while ROGUE-L uses the longest common sequence between the target and the predicted summaries for evaluation. In rest of this section, we provide methods to analyze and evaluate our pipeline at each step."
],
"extractive_spans": [
"standard ROGUE metric",
"Recall, Precision and INLINEFORM0 scores for ROGUE-1",
" INLINEFORM2 scores for ROGUE-2 and ROGUE-L"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the evaluation of summaries we use the standard ROGUE metric. For comparison with previous AMR based summarization methods, we report the Recall, Precision and INLINEFORM0 scores for ROGUE-1. Since most of the literature on summarization uses INLINEFORM1 scores for ROGUE-2 and ROGUE-L for comparison, we also report INLINEFORM2 scores for ROGUE-2 and ROGUE-L for our method."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1409fd0c7b243b821eb0ac43d5de5971317591a4",
"8822cb2f8feda7ecefae1e9fbfa76c0c196286c7"
],
"answer": [
{
"evidence": [
"We used two datasets for the task - AMR Bank BIBREF10 and CNN-Dailymail ( BIBREF11 BIBREF12 ). We use the proxy report section of the AMR Bank, as it is the only one that is relevant for the task because it contains the gold-standard (human generated) AMR graphs for news articles, and the summaries. In the training set the stories and summaries contain 17.5 sentences and 1.5 sentences on an average respectively. The training and test sets contain 298 and 33 summary document pairs respectively."
],
"extractive_spans": [
"AMR Bank",
"CNN-Dailymail"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used two datasets for the task - AMR Bank BIBREF10 and CNN-Dailymail ( BIBREF11 BIBREF12 ). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We used two datasets for the task - AMR Bank BIBREF10 and CNN-Dailymail ( BIBREF11 BIBREF12 ). We use the proxy report section of the AMR Bank, as it is the only one that is relevant for the task because it contains the gold-standard (human generated) AMR graphs for news articles, and the summaries. In the training set the stories and summaries contain 17.5 sentences and 1.5 sentences on an average respectively. The training and test sets contain 298 and 33 summary document pairs respectively."
],
"extractive_spans": [
"AMR Bank BIBREF10",
"CNN-Dailymail ( BIBREF11 BIBREF12 )"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used two datasets for the task - AMR Bank BIBREF10 and CNN-Dailymail ( BIBREF11 BIBREF12 )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3d31df2c5380865189175ecf356893565f2e4e95",
"8ea7270a6678afc65e8263fbef79024f4172192c"
],
"answer": [
{
"evidence": [
"For the CNN-Dailymail dataset, the Lead-3 model is considered a strong baseline; both the abstractive BIBREF16 and extractive BIBREF14 state-of-the art methods on this dataset beat this baseline only marginally. The Lead-3 model simply produces the leading three sentences of the document as its summary.",
"For the proxy report section of the AMR bank, we consider the Lead-1-AMR model as the baseline. For this dataset we already have the gold-standard AMR graphs of the sentences. Therefore, we only need to nullify the error introduced by the generator."
],
"extractive_spans": [
"Lead-3",
"Lead-1-AMR"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the CNN-Dailymail dataset, the Lead-3 model is considered a strong baseline; both the abstractive BIBREF16 and extractive BIBREF14 state-of-the art methods on this dataset beat this baseline only marginally.",
"For the proxy report section of the AMR bank, we consider the Lead-1-AMR model as the baseline."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the CNN-Dailymail dataset, the Lead-3 model is considered a strong baseline; both the abstractive BIBREF16 and extractive BIBREF14 state-of-the art methods on this dataset beat this baseline only marginally. The Lead-3 model simply produces the leading three sentences of the document as its summary.",
"For the proxy report section of the AMR bank, we consider the Lead-1-AMR model as the baseline. For this dataset we already have the gold-standard AMR graphs of the sentences. Therefore, we only need to nullify the error introduced by the generator.",
"In order to compare our summary graph extraction step with the previous work ( BIBREF0 ), we generate the final summary using the same generation method as used by them. Their method uses a simple module based on alignments for generating summary after step-2. The alignments simply map the words in the original sentence with the node or edge in the AMR graph. To generate the summary we find the words aligned with the sentence in the selected graph and output them in no particular order as the predicted summary. Though this does not generate grammatically correct sentences, we can still use the ROGUE-1 metric similar to BIBREF0 , as it is based on comparing uni-grams between the target and predicted summaries."
],
"extractive_spans": [
"Lead-3 model",
" Lead-1-AMR",
"BIBREF0 "
],
"free_form_answer": "",
"highlighted_evidence": [
"For the CNN-Dailymail dataset, the Lead-3 model is considered a strong baseline; both the abstractive BIBREF16 and extractive BIBREF14 state-of-the art methods on this dataset beat this baseline only marginally.",
"For the proxy report section of the AMR bank, we consider the Lead-1-AMR model as the baseline. ",
"In order to compare our summary graph extraction step with the previous work ( BIBREF0 ), we generate the final summary using the same generation method as used by them."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"013f2920c46c204bdbe2e396c9cc6e1496d81bdc",
"bdf959e26acb032db525432772e44b1770aadf70"
],
"answer": [
{
"evidence": [
"After parsing (Step 1) we have the AMR graphs for the story sentences. In this step we extract the AMR graphs of the summary sentences using story sentence AMRs. We divide this task in two parts. First is finding the important sentences from the story and then extracting the key information from those sentences using their AMR graphs."
],
"extractive_spans": [
" finding the important sentences from the story",
"extracting the key information from those sentences using their AMR graphs"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this step we extract the AMR graphs of the summary sentences using story sentence AMRs. We divide this task in two parts. First is finding the important sentences from the story and then extracting the key information from those sentences using their AMR graphs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Using this idea of picking important sentences from the beginning, we propose two methods, first is to simply pick initial few sentences, we call this first-n method where n stands for the number of sentences. We pick initial 3 sentences for the CNN-Dailymail corpus i.e. first-3 and only the first sentence for the proxy report section (AMR Bank) i.e. first-1 as they produce the best scores on the ROGUE metric compared to any other first-n. Second, we try to capture the relation between the two most important entities (we define importance by the number of occurrences of the entity in the story) of the document. For this we simply find the first sentence which contains both these entities. We call this the first co-occurrence based sentence selection. We also select the first sentence along with first co-occurrence based sentence selection as the important sentences. We call this the first co-occurrence+first based sentence selection."
],
"extractive_spans": [],
"free_form_answer": " Two methods: first is to simply pick initial few sentences, second is to capture the relation between the two most important entities (select the first sentence which contains both these entities).",
"highlighted_evidence": [
"Using this idea of picking important sentences from the beginning, we propose two methods, first is to simply pick initial few sentences, we call this first-n method where n stands for the number of sentences. We pick initial 3 sentences for the CNN-Dailymail corpus i.e. first-3 and only the first sentence for the proxy report section (AMR Bank) i.e. first-1 as they produce the best scores on the ROGUE metric compared to any other first-n. Second, we try to capture the relation between the two most important entities (we define importance by the number of occurrences of the entity in the story) of the document. For this we simply find the first sentence which contains both these entities. We call this the first co-occurrence based sentence selection. We also select the first sentence along with first co-occurrence based sentence selection as the important sentences. We call this the first co-occurrence+first based sentence selection."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"What is the performance of their method?",
"Which evaluation methods are used?",
"What dataset is used in this paper?",
"Which other methods do they compare with?",
"How are sentences selected from the summary graph?"
],
"question_id": [
"0767ca8ff1424f7a811222ca108a33b6411aaa8a",
"e8f969ffd637b82d04d3be28c51f0f3ca6b3883e",
"46227b4265f1d300a5ed71bf40822829de662bc2",
"a6a48de63c1928238b37c2a01c924b852fe752f8",
"b65a83a24fc66728451bb063cf6ec50134c8bfb0"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: The graphical representation of the parse of the sentence : ’I looked carefully all around me’ using AMRICA",
"Figure 2: Graph of the best Rogue-1 recall scores for 5000 summaries (around 20000 sentences) in the CNN-Dailymail corpus. Y-axis is the ROGUE score and X-axis is the cumulative percentage of sentence with the corresponding score",
"Table 1: Results of various algorithms after step 2 (AMR extraction)",
"Table 2: Results of various algorithms after step 3 - summary generation using JAMR)",
"Table 3: Comparison of rogue scores on the original best sentences with the sentences after generations with there gold-standard AMR",
"Table 4: Results on the CNN/Dailymail corpus",
"Figure 3: The graphical representation of the AMR graph of the sentence : ’Absurd as it might seem to me , a thousand miles from any human habitation and in danger of death , I took out of my pocket a sheet of paper and my fountain - pen’",
"Figure 4: The graphical representation of the truncated AMR graph of the sentence : ’Absurd as it might seem to me , a thousand miles from any human habitation and in danger of death , I took out of my pocket a sheet of paper and my fountain - pen’"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"9-Figure3-1.png",
"10-Figure4-1.png"
]
} | [
"Which evaluation methods are used?",
"How are sentences selected from the summary graph?"
] | [
[
"1706.01678-Procedure to Analyze and Evaluate each step-0"
],
[
"1706.01678-Step 2: Story AMR to Summary AMR-0",
"1706.01678-Step 2: Story AMR to Summary AMR-6"
]
] | [
"Quantitative evaluation methods using ROUGE, Recall, Precision and F1.",
" Two methods: first is to simply pick initial few sentences, second is to capture the relation between the two most important entities (select the first sentence which contains both these entities)."
] | 125 |
1902.09666 | Predicting the Type and Target of Offensive Posts in Social Media | As offensive content has become pervasive in social media, there has been much research in identifying potentially offensive messages. However, previous work on this topic did not consider the problem as a whole, but rather focused on detecting very specific types of offensive content, e.g., hate speech, cyberbulling, or cyber-aggression. In contrast, here we target several different kinds of offensive content. In particular, we model the task hierarchically, identifying the type and the target of offensive messages in social media. For this purpose, we complied the Offensive Language Identification Dataset (OLID), a new dataset with tweets annotated for offensive content using a fine-grained three-layer annotation scheme, which we make publicly available. We discuss the main similarities and differences between OLID and pre-existing datasets for hate speech identification, aggression detection, and similar tasks. We further experiment with and we compare the performance of different machine learning models on OLID. | {
"paragraphs": [
[
"Offensive content has become pervasive in social media and a reason of concern for government organizations, online communities, and social media platforms. One of the most common strategies to tackle the problem is to train systems capable of recognizing offensive content, which then can be deleted or set aside for human moderation. In the last few years, there have been several studies published on the application of computational methods to deal with this problem. Most prior work focuses on a different aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments INLINEFORM0 , hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 .",
"Recently, Waseem et. al. ( BIBREF12 ) acknowledged the similarities among prior work and discussed the need for a typology that differentiates between whether the (abusive) language is directed towards a specific individual or entity or towards a generalized group and whether the abusive content is explicit or implicit. Wiegand et al. ( BIBREF11 ) followed this trend as well on German tweets. In their evaluation, they have a task to detect offensive vs not offensive tweets and a second task for distinguishing between the offensive tweets as profanity, insult, or abuse. However, no prior work has explored the target of the offensive language, which is important in many scenarios, e.g., when studying hate speech with respect to a specific target.",
"Therefore, we expand on these ideas by proposing a a hierarchical three-level annotation model that encompasses:",
"",
"",
"Using this annotation model, we create a new large publicly available dataset of English tweets. The key contributions of this paper are as follows:"
],
[
"Different abusive and offense language identification sub-tasks have been explored in the past few years including aggression identification, bullying detection, hate speech, toxic comments, and offensive language.",
"",
"Aggression identification: The TRAC shared task on Aggression Identification BIBREF2 provided participants with a dataset containing 15,000 annotated Facebook posts and comments in English and Hindi for training and validation. For testing, two different sets, one from Facebook and one from Twitter were provided. Systems were trained to discriminate between three classes: non-aggressive, covertly aggressive, and overtly aggressive.",
"",
"Bullying detection: Several studies have been published on bullying detection. One of them is the one by xu2012learning which apply sentiment analysis to detect bullying in tweets. xu2012learning use topic models to to identify relevant topics in bullying. Another related study is the one by dadvar2013improving which use user-related features such as the frequency of profanity in previous messages to improve bullying detection.",
"",
"Hate speech identification: It is perhaps the most widespread abusive language detection sub-task. There have been several studies published on this sub-task such as kwok2013locate and djuric2015hate who build a binary classifier to distinguish between `clean' comments and comments containing hate speech and profanity. More recently, Davidson et al. davidson2017automated presented the hate speech detection dataset containing over 24,000 English tweets labeled as non offensive, hate speech, and profanity.",
"",
"Offensive language: The GermEval BIBREF11 shared task focused on Offensive language identification in German tweets. A dataset of over 8,500 annotated tweets was provided for a course-grained binary classification task in which systems were trained to discriminate between offensive and non-offensive tweets and a second task where the organizers broke down the offensive class into three classes: profanity, insult, and abuse.",
"",
"Toxic comments: The Toxic Comment Classification Challenge was an open competition at Kaggle which provided participants with comments from Wikipedia labeled in six classes: toxic, severe toxic, obscene, threat, insult, identity hate.",
"",
"While each of these sub-tasks tackle a particular type of abuse or offense, they share similar properties and the hierarchical annotation model proposed in this paper aims to capture this. Considering that, for example, an insult targeted at an individual is commonly known as cyberbulling and that insults targeted at a group are known as hate speech, we pose that OLID's hierarchical annotation model makes it a useful resource for various offensive language identification sub-tasks."
],
[
"In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and type (B) and target (C) of the offensive language. Each level is described in more detail in the following subsections and examples are shown in Table TABREF10 ."
],
[
"Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets.",
"",
"Not Offensive (NOT): Posts that do not contain offense or profanity;",
"Offensive (OFF): We label a post as offensive if it contains any form of non-acceptable language (profanity) or a targeted offense, which can be veiled or direct. This category includes insults, threats, and posts containing profane language or swear words."
],
[
"Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.",
"Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer);",
"Untargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language."
],
[
"Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH).",
"",
"Individual (IND): Posts targeting an individual. It can be a a famous person, a named individual or an unnamed participant in the conversation. Insults and threats targeted at individuals are often defined as cyberbulling.",
"Group (GRP): The target of these offensive posts is a group of people considered as a unity due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or other common characteristic. Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech.",
"Other (OTH): The target of these offensive posts does not belong to any of the previous two categories (e.g. an organization, a situation, an event, or an issue)."
],
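A compact way to encode this three-level scheme is as a nested label structure; the sketch below mirrors the level A/B/C definitions above, but the class and field names are illustrative and are not part of the released dataset format:

```python
from dataclasses import dataclass
from typing import Optional

LEVEL_A = {"OFF", "NOT"}          # offensive vs. not offensive
LEVEL_B = {"TIN", "UNT"}          # targeted insult/threat vs. untargeted profanity
LEVEL_C = {"IND", "GRP", "OTH"}   # individual, group, or other target

@dataclass
class OLIDLabel:
    level_a: str                   # always present
    level_b: Optional[str] = None  # only when level_a == "OFF"
    level_c: Optional[str] = None  # only when level_b == "TIN"

    def validate(self):
        assert self.level_a in LEVEL_A
        if self.level_a == "NOT":
            assert self.level_b is None and self.level_c is None
        else:
            assert self.level_b in LEVEL_B
            if self.level_b == "UNT":
                assert self.level_c is None
            else:
                assert self.level_c in LEVEL_C

# e.g. an insult aimed at a group (roughly, hate speech):
OLIDLabel("OFF", "TIN", "GRP").validate()
```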
[
"The data included in OLID has been collected from Twitter. We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive so we tried different strategies to keep a reasonable number of tweets in the offensive class amounting to around 30% of the dataset including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. Although `he is' is lower in offensive content we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords which tend to be higher in offensive content, and sampled our dataset such that 50% of the the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data using crowdsourcing using the platform Figure Eight. We ensure data quality by: 1) we only received annotations from individuals who were experienced in the platform; and 2) we used test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 ."
],
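A sketch of the placeholder substitution described above, using simple regular expressions; the exact placeholder strings are not specified in the text, so "URL" and "@USER" below are assumptions:

```python
import re

URL_RE = re.compile(r"https?://\S+")
MENTION_RE = re.compile(r"@\w+")

def anonymize(tweet: str) -> str:
    """Replace URLs and user mentions with placeholders; no IDs or metadata are kept."""
    tweet = URL_RE.sub("URL", tweet)
    tweet = MENTION_RE.sub("@USER", tweet)
    return tweet

print(anonymize("@someone this is awful, see https://example.com/post"))
# -> "@USER this is awful, see URL"
```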
[
"We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM.",
"Our models are trained on the training data, and evaluated by predicting the labels for the held-out test set. The distribution is described in Table TABREF15 . We evaluate and compare the models using the macro-averaged F1-score as the label distribution is highly imbalanced. Per-class Precision (P), Recall (R), and F1-score (F1), also with other averaged metrics are also reported. The models are compared against baselines of predicting all labels as the majority or minority classes."
],
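A minimal sketch of the simplest of these baselines (a linear SVM over word unigrams, evaluated with the macro-averaged F1-score) using scikit-learn; the paper does not specify preprocessing or hyperparameters, so the defaults below are assumptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score, classification_report

def run_unigram_svm(train_texts, train_labels, test_texts, test_labels):
    """Train a linear SVM on word-unigram counts and report macro-averaged F1.
    For level A, labels would be "OFF"/"NOT"; the same setup applies to levels B and C."""
    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 1)),   # word unigram counts
        LinearSVC()                            # linear SVM classifier
    )
    model.fit(train_texts, train_labels)
    predictions = model.predict(test_texts)
    print(classification_report(test_labels, predictions))          # per-class P/R/F1
    return f1_score(test_labels, predictions, average="macro")      # macro-averaged F1
```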
[
"The performance on discriminating between offensive (OFF) and non-offensive (NOT) posts is reported in Table TABREF18 . We can see that all systems perform significantly better than chance, with the neural models being substantially better than the SVM. The CNN outperforms the RNN model, achieving a macro-F1 score of 0.80."
],
[
"In this experiment, the two systems were trained to discriminate between insults and threats (TIN) and untargeted (UNT) offenses, which generally refer to profanity. The results are shown in Table TABREF19 .",
"The CNN system achieved higher performance in this experiment compared to the BiLSTM, with a macro-F1 score of 0.69. All systems performed better at identifying target and threats (TIN) than untargeted offenses (UNT)."
],
[
"The results of the offensive target identification experiment are reported in Table TABREF20 . Here the systems were trained to distinguish between three targets: a group (GRP), an individual (IND), or others (OTH). All three models achieved similar results far surpassing the random baselines, with a slight performance edge for the neural models.",
"The performance of all systems for the OTH class is 0. This poor performances can be explained by two main factors. First, unlike the two other classes, OTH is a heterogeneous collection of targets. It includes offensive tweets targeted at organizations, situations, events, etc. making it more challenging for systems to learn discriminative properties of this class. Second, this class contains fewer training instances than the other two. There are only 395 instances in OTH, and 1,075 in GRP, and 2,407 in IND."
],
[
"This paper presents OLID, a new dataset with annotation of type and target of offensive language. OLID is the official dataset of the shared task SemEval 2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval) BIBREF16 . In OffensEval, each annotation level in OLID is an independent sub-task. The dataset contains 14,100 tweets and is released freely to the research community. To the best of our knowledge, this is the first dataset to contain annotation of type and target of offenses in social media and it opens several new avenues for research in this area. We present baseline experiments using SVMs and neural networks to identify the offensive tweets, discriminate between insults, threats, and profanity, and finally to identify the target of the offensive messages. The results show that this is a challenging task. A CNN-based sentence classifier achieved the best results in all three sub-tasks.",
"In future work, we would like to make a cross-corpus comparison of OLID and datasets annotated for similar tasks such as aggression identification BIBREF2 and hate speech detection BIBREF8 . This comparison is, however, far from trivial as the annotation of OLID is different."
],
[
"The research presented in this paper was partially supported by an ERAS fellowship awarded by the University of Wolverhampton."
]
],
"section_name": [
"Introduction",
"Related Work",
"Hierarchically Modelling Offensive Content",
"Level A: Offensive language Detection",
"Level B: Categorization of Offensive Language",
"Level C: Offensive Language Target Identification",
"Data Collection",
"Experiments and Evaluation",
"Offensive Language Detection",
"Categorization of Offensive Language",
"Offensive Language Target Identification",
"Conclusion and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"710b79c1764a5c374181d22c40405e48e46c9475",
"7cb14177df857a2fc6924f9ee1db0da32bc85869",
"de250af74bdaa41eddbbf03be791a39fd51f2de6"
],
"answer": [
{
"evidence": [
"We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM."
],
"extractive_spans": [
"linear SVM",
"bidirectional Long Short-Term-Memory (BiLSTM)",
"Convolutional Neural Network (CNN)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM."
],
"extractive_spans": [
"linear SVM",
"bidirectional Long Short-Term-Memory (BiLSTM)",
"Convolutional Neural Network (CNN)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead.",
"Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM."
],
"extractive_spans": [
"linear SVM trained on word unigrams",
" bidirectional Long Short-Term-Memory (BiLSTM)",
" Convolutional Neural Network (CNN) "
],
"free_form_answer": "",
"highlighted_evidence": [
"We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"985b8f9bb01b53a734f42b520af29870f4e077b0"
],
"answer": [
{
"evidence": [
"Recently, Waseem et. al. ( BIBREF12 ) acknowledged the similarities among prior work and discussed the need for a typology that differentiates between whether the (abusive) language is directed towards a specific individual or entity or towards a generalized group and whether the abusive content is explicit or implicit. Wiegand et al. ( BIBREF11 ) followed this trend as well on German tweets. In their evaluation, they have a task to detect offensive vs not offensive tweets and a second task for distinguishing between the offensive tweets as profanity, insult, or abuse. However, no prior work has explored the target of the offensive language, which is important in many scenarios, e.g., when studying hate speech with respect to a specific target."
],
"extractive_spans": [
"no prior work has explored the target of the offensive language"
],
"free_form_answer": "",
"highlighted_evidence": [
"However, no prior work has explored the target of the offensive language, which is important in many scenarios, e.g., when studying hate speech with respect to a specific target."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2b209806dc5b32658f12ec18c449b1cc667a85e7",
"61eaf62bda7a05c65a5f7469673929c38dab40c9",
"956b5562b42da93ca0508613501dccb4d3f9b5b6"
],
"answer": [
{
"evidence": [
"Using this annotation model, we create a new large publicly available dataset of English tweets. The key contributions of this paper are as follows:"
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"Using this annotation model, we create a new large publicly available dataset of English tweets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Using this annotation model, we create a new large publicly available dataset of English tweets. The key contributions of this paper are as follows:"
],
"extractive_spans": [
"English "
],
"free_form_answer": "",
"highlighted_evidence": [
"Using this annotation model, we create a new large publicly available dataset of English tweets. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Using this annotation model, we create a new large publicly available dataset of English tweets. The key contributions of this paper are as follows:"
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"Using this annotation model, we create a new large publicly available dataset of English tweets. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"06bef9f965e0df0274090562f8345d3d9180a544",
"b250fd2019421911362cc78e7fe0696615ed0284",
"fb2149e3fa1fc8db4bb7dd2c8d79ac81ca9ea1c4"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Distribution of label combinations in OLID."
],
"extractive_spans": [],
"free_form_answer": "14,100 tweets",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Distribution of label combinations in OLID."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Distribution of label combinations in OLID.",
"The data included in OLID has been collected from Twitter. We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive so we tried different strategies to keep a reasonable number of tweets in the offensive class amounting to around 30% of the dataset including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. Although `he is' is lower in offensive content we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords which tend to be higher in offensive content, and sampled our dataset such that 50% of the the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data using crowdsourcing using the platform Figure Eight. We ensure data quality by: 1) we only received annotations from individuals who were experienced in the platform; and 2) we used test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 ."
],
"extractive_spans": [],
"free_form_answer": "Dataset contains total of 14100 annotations.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Distribution of label combinations in OLID.",
"The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0ce9451170a61260e3458c996de322c4d6caf1f6",
"f52f80b4b459acf13024e484953521ce927c3b24",
"f6524c3b1b2294e6e8dffdb005192dad12b30dd8"
],
"answer": [
{
"evidence": [
"Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.",
"Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer);",
"Untargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language.",
"Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH).",
"Individual (IND): Posts targeting an individual. It can be a a famous person, a named individual or an unnamed participant in the conversation. Insults and threats targeted at individuals are often defined as cyberbulling.",
"Group (GRP): The target of these offensive posts is a group of people considered as a unity due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or other common characteristic. Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech.",
"Other (OTH): The target of these offensive posts does not belong to any of the previous two categories (e.g. an organization, a situation, an event, or an issue)."
],
"extractive_spans": [],
"free_form_answer": "non-targeted profanity and swearing, targeted insults such as cyberbullying, offensive content related to ethnicity, gender or sexual orientation, political affiliation, religious belief, and anything belonging to hate speech",
"highlighted_evidence": [
"Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.\n\nTargeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer);\n\nUntargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language.",
"Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH).\n\nIndividual (IND): Posts targeting an individual. It can be a a famous person, a named individual or an unnamed participant in the conversation. Insults and threats targeted at individuals are often defined as cyberbulling.\n\nGroup (GRP): The target of these offensive posts is a group of people considered as a unity due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or other common characteristic. Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech.\n\nOther (OTH): The target of these offensive posts does not belong to any of the previous two categories (e.g. an organization, a situation, an event, or an issue)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.",
"Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer);",
"Untargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language."
],
"extractive_spans": [
"Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others ",
"Untargeted (UNT): Posts containing non-targeted profanity and swearing."
],
"free_form_answer": "",
"highlighted_evidence": [
"Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.\n\nTargeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer);\n\nUntargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and type (B) and target (C) of the offensive language. Each level is described in more detail in the following subsections and examples are shown in Table TABREF10 .",
"Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets.",
"Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.",
"Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH)."
],
"extractive_spans": [
"offensive (OFF) and non-offensive (NOT)",
"targeted (TIN) and untargeted (INT) insults",
"targets of insults and threats as individual (IND), group (GRP), and other (OTH)"
],
"free_form_answer": "",
"highlighted_evidence": [
"In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and type (B) and target (C) of the offensive language.",
"Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets.",
"Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.",
"Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"cba16ebea4215ad9e9fe5250a3ba2d46f265f40d"
],
"answer": [
{
"evidence": [
"The performance on discriminating between offensive (OFF) and non-offensive (NOT) posts is reported in Table TABREF18 . We can see that all systems perform significantly better than chance, with the neural models being substantially better than the SVM. The CNN outperforms the RNN model, achieving a macro-F1 score of 0.80.",
"The CNN system achieved higher performance in this experiment compared to the BiLSTM, with a macro-F1 score of 0.69. All systems performed better at identifying target and threats (TIN) than untargeted offenses (UNT)."
],
"extractive_spans": [
"CNN "
],
"free_form_answer": "",
"highlighted_evidence": [
"The performance on discriminating between offensive (OFF) and non-offensive (NOT) posts is reported in Table TABREF18 . We can see that all systems perform significantly better than chance, with the neural models being substantially better than the SVM. The CNN outperforms the RNN model, achieving a macro-F1 score of 0.80.",
"The CNN system achieved higher performance in this experiment compared to the BiLSTM, with a macro-F1 score of 0.69. All systems performed better at identifying target and threats (TIN) than untargeted offenses (UNT)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"4001453e026a63ebbce59e23f86431dad1a12d37"
],
"answer": [
{
"evidence": [
"The data included in OLID has been collected from Twitter. We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive so we tried different strategies to keep a reasonable number of tweets in the offensive class amounting to around 30% of the dataset including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. Although `he is' is lower in offensive content we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords which tend to be higher in offensive content, and sampled our dataset such that 50% of the the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data using crowdsourcing using the platform Figure Eight. We ensure data quality by: 1) we only received annotations from individuals who were experienced in the platform; and 2) we used test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 ."
],
"extractive_spans": [
"five annotators"
],
"free_form_answer": "",
"highlighted_evidence": [
"We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"8caf66b3081c2546f5b6b4549641f10ef2f62d77"
],
"answer": [
{
"evidence": [
"Offensive content has become pervasive in social media and a reason of concern for government organizations, online communities, and social media platforms. One of the most common strategies to tackle the problem is to train systems capable of recognizing offensive content, which then can be deleted or set aside for human moderation. In the last few years, there have been several studies published on the application of computational methods to deal with this problem. Most prior work focuses on a different aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments INLINEFORM0 , hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 ."
],
"extractive_spans": [
" Most prior work focuses on a different aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments INLINEFORM0 , hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 ."
],
"free_form_answer": "",
"highlighted_evidence": [
"Most prior work focuses on a different aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments INLINEFORM0 , hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"022a7c6e964a11e13c4a1c1a4d10472bff21e480"
],
"answer": [
{
"evidence": [
"In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and type (B) and target (C) of the offensive language. Each level is described in more detail in the following subsections and examples are shown in Table TABREF10 .",
"Level A: Offensive language Detection",
"Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets.",
"Level B: Categorization of Offensive Language",
"Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.",
"Level C: Offensive Language Target Identification",
"Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH)."
],
"extractive_spans": [
"Level A: Offensive language Detection\n",
"Level B: Categorization of Offensive Language\n",
"Level C: Offensive Language Target Identification\n"
],
"free_form_answer": "",
"highlighted_evidence": [
"n the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and type (B) and target (C) of the offensive language. Each level is described in more detail in the following subsections and examples are shown in Table TABREF10 .",
"Level A: Offensive language Detection\nLevel A discriminates between offensive (OFF) and non-offensive (NOT) tweets.",
"Level B: Categorization of Offensive Language\nLevel B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.",
"Level C: Offensive Language Target Identification\nLevel C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"1ef414648e053b756ab33357e5172602e9eccaae"
],
"answer": [
{
"evidence": [
"The data included in OLID has been collected from Twitter. We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive so we tried different strategies to keep a reasonable number of tweets in the offensive class amounting to around 30% of the dataset including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. Although `he is' is lower in offensive content we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords which tend to be higher in offensive content, and sampled our dataset such that 50% of the the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data using crowdsourcing using the platform Figure Eight. We ensure data quality by: 1) we only received annotations from individuals who were experienced in the platform; and 2) we used test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 .",
"FLOAT SELECTED: Table 3: Distribution of label combinations in OLID."
],
"extractive_spans": [],
"free_form_answer": "Level A: 14100 Tweets\nLevel B: 4640 Tweets\nLevel C: 4089 Tweets",
"highlighted_evidence": [
" The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 .",
"FLOAT SELECTED: Table 3: Distribution of label combinations in OLID."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
"yes",
"yes",
"yes",
"yes",
"yes"
],
"question": [
"What models are used in the experiment?",
"What are the differences between this dataset and pre-existing ones?",
"In what language are the tweets?",
"What is the size of the new dataset?",
"What kinds of offensive content are explored?",
"What is the best performing model?",
"How many annotators participated?",
"What is the definition of offensive language?",
"What are the three layers of the annotation scheme?",
"How long is the dataset for each step of hierarchy?"
],
"question_id": [
"8c852fc29bda014d28c3ee5b5a7e449ab9152d35",
"682e26262abba473412f68cbeb5f69aa3b9968d7",
"5daeb8d4d6f3b8543ec6309a7a35523e160437eb",
"74fb77a624ea9f1821f58935a52cca3086bb0981",
"d015faf0f8dcf2e15c1690bbbe2bf1e7e0ce3751",
"55bd59076a49b19d3283af41c5e3ccb875f3eb0c",
"521280a87c43fcdf9f577da235e7072a23f0673e",
"5a8cc8f80509ea77d8213ed28c5ead501c68c725",
"290ee79b5e3872e0496a6a0fc9b103ab7d8f6c30",
"1b72aa2ec3ce02131e60626639f0cf2056ec23ca"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
"",
"",
"Offensive language",
"Offensive language",
"Offensive language",
"Offensive language",
"Offensive language"
],
"topic_background": [
"",
"",
"",
"",
"",
"research",
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"Table 1: Four tweets from the dataset, with their labels for each level of the annotation schema.",
"Table 2: The keywords from the full dataset (except for the first three rows) and the percentage of offensive tweets for each keyword.",
"Table 3: Distribution of label combinations in OLID.",
"Table 4: Results for offensive language detection (Level A). We report Precision (P), Recall (R), and F1 for each model/baseline on all classes (NOT, OFF), and weighted averages. Macro-F1 is also listed (best in bold).",
"Table 5: Results for offensive language categorization (level B). We report Precision (P), Recall (R), and F1 for each model/baseline on all classes (TIN, UNT), and weighted averages. Macro-F1 is also listed (best in bold).",
"Table 6: Results for offense target identification (level C). We report Precision (P), Recall (R), and F1 for each model/baseline on all classes (GRP, IND, OTH), and weighted averages. Macro-F1 is also listed (best in bold)."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png",
"5-Table5-1.png",
"5-Table6-1.png"
]
} | [
"What is the size of the new dataset?",
"What kinds of offensive content are explored?",
"How long is the dataset for each step of hierarchy?"
] | [
[
"1902.09666-4-Table3-1.png",
"1902.09666-Data Collection-0"
],
[
"1902.09666-Level C: Offensive Language Target Identification-4",
"1902.09666-Level C: Offensive Language Target Identification-2",
"1902.09666-Level C: Offensive Language Target Identification-0",
"1902.09666-Level A: Offensive language Detection-0",
"1902.09666-Level B: Categorization of Offensive Language-2",
"1902.09666-Level B: Categorization of Offensive Language-1",
"1902.09666-Level B: Categorization of Offensive Language-0",
"1902.09666-Level C: Offensive Language Target Identification-3",
"1902.09666-Hierarchically Modelling Offensive Content-0"
],
[
"1902.09666-4-Table3-1.png",
"1902.09666-Data Collection-0"
]
] | [
"Dataset contains total of 14100 annotations.",
"non-targeted profanity and swearing, targeted insults such as cyberbullying, offensive content related to ethnicity, gender or sexual orientation, political affiliation, religious belief, and anything belonging to hate speech",
"Level A: 14100 Tweets\nLevel B: 4640 Tweets\nLevel C: 4089 Tweets"
] | 126 |
1604.00400 | Revisiting Summarization Evaluation for Scientific Articles | Evaluation of text summarization approaches have been mostly based on metrics that measure similarities of system generated summaries with a set of human written gold-standard summaries. The most widely used metric in summarization evaluation has been the ROUGE family. ROUGE solely relies on lexical overlaps between the terms and phrases in the sentences; therefore, in cases of terminology variations and paraphrasing, ROUGE is not as effective. Scientific article summarization is one such case that is different from general domain summarization (e.g. newswire data). We provide an extensive analysis of ROUGE's effectiveness as an evaluation metric for scientific summarization; we show that, contrary to the common belief, ROUGE is not much reliable in evaluating scientific summaries. We furthermore show how different variants of ROUGE result in very different correlations with the manual Pyramid scores. Finally, we propose an alternative metric for summarization evaluation which is based on the content relevance between a system generated summary and the corresponding human written summaries. We call our metric SERA (Summarization Evaluation by Relevance Analysis). Unlike ROUGE, SERA consistently achieves high correlations with manual scores which shows its effectiveness in evaluation of scientific article summarization. | {
"paragraphs": [
[
"Automatic text summarization has been an active research area in natural language processing for several decades. To compare and evaluate the performance of different summarization systems, the most intuitive approach is assessing the quality of the summaries by human evaluators. However, manual evaluation is expensive and the obtained results are subjective and difficult to reproduce BIBREF0 . To address these problems, automatic evaluation measures for summarization have been proposed. Rouge BIBREF1 is one of the first and most widely used metrics in summarization evaluation. It facilitates evaluation of system generated summaries by comparing them to a set of human written gold-standard summaries. It is inspired by the success of a similar metric Bleu BIBREF2 which is being used in Machine Translation (MT) evaluation. The main success of Rouge is due to its high correlation with human assessment scores on standard benchmarks BIBREF1 . Rouge has been used as one of the main evaluation metrics in later summarization benchmarks such as TAC[1] BIBREF3 .",
"[1]Text Analysis Conference (TAC) is a series of workshops for evaluating research in Natural Language Processing",
"Since the establishment of Rouge, almost all research in text summarization have used this metric as the main means for evaluating the quality of the proposed approaches. The public availability of Rouge as a toolkit for summarization evaluation has contributed to its wide usage. While Rouge has originally shown good correlations with human assessments, the study of its effectiveness was only limited to a few benchmarks on news summarization data (DUC[2] 2001-2003 benchmarks). Since 2003, summarization has grown to much further domains and genres such as scientific documents, social media and question answering. While there is not enough compelling evidence about the effectiveness of Rouge on these other summarization tasks, published research is almost always evaluated by Rouge. In addition, Rouge has a large number of possible variants and the published research often (arbitrarily) reports only a few of these variants.",
"[2]Document Understanding Conference (DUC) was one of NIST workshops that provided infrastructure for evaluation of text summarization methodologies (http://duc.nist.gov/).",
"By definition, Rouge solely relies on lexical overlaps (such as n-gram and sequence overlaps) between the system generated and human written gold-standard summaries. Higher lexical overlaps between the two show that the system generated summary is of higher quality. Therefore, in cases of terminology nuances and paraphrasing, Rouge is not accurate in estimating the quality of the summary.",
"We study the effectiveness of Rouge for evaluating scientific summarization. Scientific summarization targets much more technical and focused domains in which the goal is providing summaries for scientific articles. Scientific articles are much different than news articles in elements such as length, complexity and structure. Thus, effective summarization approaches usually have much higher compression rate, terminology variations and paraphrasing BIBREF4 .",
"Scientific summarization has attracted more attention recently (examples include works by abu2011coherent, qazvinian2013generating, and cohan2015scientific). Thus, it is important to study the validity of existing methodologies applied to the evaluation of news article summarization for this task. In particular, we raise the important question of how effective is Rouge, as an evaluation metric for scientific summarization? We answer this question by comparing Rouge scores with semi-manual evaluation score (Pyramid) in TAC 2014 scientific summarization dataset[1]. Results reveal that, contrary to the common belief, correlations between Rouge and the Pyramid scores are weak, which challenges its effectiveness for scientific summarization. Furthermore, we show a large variance of correlations between different Rouge variants and the manual evaluations which further makes the reliability of Rouge for evaluating scientific summaries less clear. We then propose an evaluation metric based on relevance analysis of summaries which aims to overcome the limitation of high lexical dependence in Rouge. We call our metric Sera (Summarization Evaluation by Relevance Analysis). Results show that the proposed metric achieves higher and more consistent correlations with semi-manual assessment scores.",
"[1]http://www.nist.gov/tac/2014/BiomedSumm/",
"Our contributions are as follows:",
"[2]The annotations can be accessed via the following repository: https://github.com/acohan/TAC-pyramid-Annotations/"
],
[
"Rouge has been the most widely used family of metrics in summarization evaluation. In the following, we briefly describe the different variants of Rouge:",
"Rouge-L, Rouge-W, Rouge-S and Rouge-SU were later extended to consider both the recall and precision. In calculating Rouge, stopword removal or stemming can also be considered, resulting in more variants.",
"In the summarization literature, despite the large number of variants of Rouge, only one or very few of these variants are often chosen (arbitrarily) for evaluation of the quality of the summarization approaches. When Rouge was proposed, the original variants were only recall-oriented and hence the reported correlation results BIBREF1 . The later extension of Rouge family by precision were only reflected in the later versions of the Rouge toolkit and additional evaluation of its effectiveness was not reported. Nevertheless, later published work in summarization adopted this toolkit for its ready implementation and relatively efficient performance.",
"The original Rouge metrics show high correlations with human judgments of the quality of summaries on the DUC 2001-2003 benchmarks. However, these benchmarks consist of newswire data and are intrinsically very different than other summarization tasks such as summarization of scientific papers. We argue that Rouge is not the best metric for all summarization tasks and we propose an alternative metric for evaluation of scientific summarization. The proposed alternative metric shows much higher and more consistent correlations with manual judgments in comparison with the well-established Rouge."
],
[
"Rouge functions based on the assumption that in order for a summary to be of high quality, it has to share many words or phrases with a human gold summary. However, different terminology may be used to refer to the same concepts and thus relying only on lexical overlaps may underrate content quality scores. To overcome this problem, we propose an approach based on the premise that concepts take meanings from the context they are in, and that related concepts co-occur frequently.",
"Our proposed metric is based on analysis of the content relevance between a system generated summary and the corresponding human written gold-standard summaries. On high level, we indirectly evaluate the content relevance between the candidate summary and the human summary using information retrieval. To accomplish this, we use the summaries as search queries and compare the overlaps of the retrieved results. Larger number of overlaps, suggest that the candidate summary has higher content quality with respect to the gold-standard. This method, enables us to also reward for terms that are not lexically equivalent but semantically related. Our method is based on the well established linguistic premise that semantically related words occur in similar contexts BIBREF5 . The context of the words can be considered as surrounding words, sentences in which they appear or the documents. For scientific summarization, we consider the context of the words as the scientific articles in which they appear. Thus, if two concepts appear in identical set of articles, they are semantically related. We consider the two summaries as similar if they refer to same set of articles even if the two summaries do not have high lexical overlaps. To capture if a summary relates to a article, we use information retrieval by considering the summaries as queries and the articles as documents and we rank the articles based on their relatedness to a given summary. For a given pair of system summary and the gold summary, similar rankings of the retrieved articles suggest that the summaries are semantically related, and thus the system summary is of higher quality.",
"Based on the domain of interest, we first construct an index from a set of articles in the same domain. Since TAC 2014 was focused on summarization in the biomedical domain, our index also comprises of biomedical articles. Given a candidate summary INLINEFORM0 and a set of gold summaries INLINEFORM1 ( INLINEFORM2 ; INLINEFORM3 is the total number of human summaries), we submit the candidate summary and gold summaries to the search engine as queries and compare their ranked results. Let INLINEFORM4 be the entire index which comprises of INLINEFORM5 total documents.",
"Let INLINEFORM0 be the ranked list of retrieved documents for candidate summary INLINEFORM1 , and INLINEFORM2 the ranked list of results for the gold summary INLINEFORM3 . These lists of results are based on a rank cut-off point INLINEFORM4 that is a parameter of the system. We provide evaluation results on different choices of cut-off point INLINEFORM5 in the Section SECREF5 We consider the following two scores: (i) simple intersection and (ii) discounted intersection by rankings. The simple intersection just considers the overlaps of the results in the two ranked lists and ignores the rankings. The discounted ranked scores, on the other hand, penalizes ranking differences between the two result sets. As an example consider the following list of retrieved documents (denoted by INLINEFORM6 s) for a candidate and a gold summary as queries:",
"Results for candidate summary: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ",
"Results for gold summary: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ",
"These two sets of results consist of identical documents but the ranking of the retrieved documents differ. Therefore, the simple intersection method assigns a score of 1.0 while in the discounted ranked score, the score will be less than 1.0 (due to ranking differences between the result lists).",
"We now define the metrics more precisely. Using the above notations, without loss of generality, we assume that INLINEFORM0 . Sera is defined as follows: INLINEFORM1 ",
"To also account for the ranked position differences, we modify this score to discount rewards based on rank differences. That is, in ideal score, we want search results from candidate summary ( INLINEFORM0 ) to be the same as results for gold-standard summaries ( INLINEFORM1 ) and the rankings of the results also be the same. If the rankings differ, we discount the reward by log of the differences of the ranks. More specifically, the discounted score (Sera-Dis) is defined as: INLINEFORM2 ",
"where, as previously defined, INLINEFORM0 , INLINEFORM1 and INLINEFORM2 are total number of human gold summaries, result list for the candidate summary and result list for the human gold summary, respectively. In addition, INLINEFORM3 shows the INLINEFORM4 th results in the ranked list INLINEFORM5 and INLINEFORM6 is the maximum attainable score used as the normalizing factor.",
"We use elasticsearch[1], an open-source search engine, for indexing and querying the articles. For retrieval model, we use the Language Modeling retrieval model with Dirichlet smoothing BIBREF6 . Since TAC 2014 benchmark is on summarization of biomedical articles, the appropriate index would be the one constructed from articles in the same domain. Therefore, we use the open access subset of Pubmed[2] which consists of published articles in biomedical literature.",
"[1]https://github.com/elastic/elasticsearch [2]PubMed is a comprehensive resource of articles and abstracts published in life sciences and biomedical literature http://www.ncbi.nlm.nih.gov/pmc/",
"We also experiment with different query (re)formulation approaches. Query reformulation is a method in Information Retrieval that aims to refine the query for better retrieval of results. Query reformulation methods often consist of removing ineffective terms and expressions from the query (query reduction) or adding terms to the query that help the retrieval (query expansion). Query reduction is specially important when queries are verbose. Since we use the summaries as queries, the queries are usually long and therefore we consider query reductions.",
"In our experiments, the query reformulation is done by 3 different ways: (i) Plain: The entire summary without stopwords and numeric values; (ii) Noun Phrases (NP): We only keep the noun phrases as informative concepts in the summary and eliminate all other terms; and (iii) Keywords (KW): We only keep the keywords and key phrases in the summary. For extracting the keywords and keyphrases (with length of up to 3 terms), we extract expressions whose idf[1] values is higher than a predefined threshold that is set as a parameter. We set this threshold to the average idf values of all terms except stopwords. idf values are calculated on the same index that is used for the retrieval.",
"[1]Inverted Document Frequency",
"We hypothesize that using only informative concepts in the summary prevents query drift and leads to retrieval of more relevant documents. Noun phrases and keywords are two heuristics for identifying the informative concepts."
],
[
"To the best of our knowledge, the only scientific summarization benchmark is from TAC 2014 summarization track. For evaluating the effectiveness of Rouge variants and our metric (Sera), we use this benchmark, which consists of 20 topics each with a biomedical journal article and 4 gold human written summaries."
],
[
"In the TAC 2014 summarization track, Rouge was suggested as the evaluation metric for summarization and no human assessment was provided for the topics. Therefore, to study the effectiveness of the evaluation metrics, we use the semi-manual Pyramid evaluation framework BIBREF7 , BIBREF8 . In the pyramid scoring, the content units in the gold human written summaries are organized in a pyramid. In this pyramid, the content units are organized in tiers and higher tiers of the pyramid indicate higher importance. The content quality of a given candidate summary is evaluated with respect to this pyramid.",
"To analyze the quality of the evaluation metrics, following the pyramid framework, we design an annotation scheme that is based on identification of important content units. Consider the following example:",
"Endogeneous small RNAs (miRNA) were genetically screened and studied to find the miRNAs which are related to tumorigenesis.",
"In the above example, the underlined expressions are the content units that convey the main meaning of the text. We call these small units, nuggets which are phrases or concepts that are the main contributors to the content quality of the summary.",
"We asked two human annotators to review the gold summaries and extract content units in these summaries. The pyramid tiers represent the occurrences of nuggets across all the human written gold-standard summaries, and therefore the nuggets are weighted based on these tiers. The intuition is that, if a nugget occurs more frequently in the human summaries, it is a more important contributor (thus belongs to higher tier in the pyramid). Thus, if a candidate summary contains this nugget, it should be rewarded more. An example of the nuggets annotations in pyramid framework is shown in Table TABREF12 . In this example, the nugget “cell mutation” belongs to the 4th tier and it suggests that the “cell mutation” nugget is a very important representative of the content of the corresponding document.",
"Let INLINEFORM0 define the tiers of the pyramid with INLINEFORM1 being the bottom tier and INLINEFORM2 the top tier. Let INLINEFORM3 be the number of the nuggets in the candidate summary that appear in the tier INLINEFORM4 . Then the pyramid score INLINEFORM5 of the candidate summary will be: INLINEFORM6 ",
"where INLINEFORM0 is the maximum attainable score used for normalizing the scores: INLINEFORM1 ",
"where INLINEFORM0 is the total number of nuggets in the summary and INLINEFORM1 .",
"We release the pyramid annotations of the TAC 2014 dataset through a public repository[2].",
"[2]https://github.com/acohan/TAC-pyramid-Annotations",
"3.1pt"
],
[
"We study the effectiveness of Rouge and our proposed method (Sera) by analyzing the correlations with semi-manual human judgments. Very few teams participated in TAC 2014 summarization track and the official results and the review paper of TAC 2014 systems were never published. Therefore, to evaluate the effectiveness of Rouge, we applied 9 well-known summarization approaches on the TAC 2014 scientific summarization dataset. Obtained Rouge and Sera results of each of these approaches are then correlated with semi-manual human judgments. In the following, we briefly describe each of these summarization approaches.",
"LexRank BIBREF9 : LexRank finds the most important (central) sentences in a document by using random walks in a graph constructed from the document sentences. In this graph, the sentences are nodes and the similarity between the sentences determines the edges. Sentences are ranked according to their importance. Importance is measured in terms of centrality of the sentence — the total number of edges incident on the node (sentence) in the graph. The intuition behind LexRank is that a document can be summarized using the most central sentences in the document that capture its main aspects.",
"Latent Semantic Analysis (LSA) based summarization BIBREF10 : In this summarization method, Singular Value Decomposition (SVD) BIBREF11 is used for deriving latent semantic structure of the document. The document is divided into sentences and a term-sentence matrix INLINEFORM0 is constructed. The matrix INLINEFORM1 is then decomposed into a number of linearly-independent singular vectors which represent the latent concepts in the document. This method, intuitively, decomposes the document into several latent topics and then selects the most representative sentences for each of these topics as the summary of the document.",
"Maximal Marginal Relevance (MMR) BIBREF12 : Maximal Marginal Relevance (MMR) is a greedy strategy for selecting sentences for the summary. Sentences are added iteratively to the summary based on their relatedness to the document as well as their novelty with respect to the current summary.",
"Citation based summarization BIBREF13 : In this method, citations are used for summarizing an article. Using the LexRank algorithm on the citation network of the article, top sentences are selected for the final summary.",
"Using frequency of the words BIBREF14 : In this method, which is one the earliest works in text summarization, raw word frequencies are used to estimate the saliency of sentences in the document. The most salient sentences are chosen for the final summary.",
"SumBasic BIBREF15 : SumBasic is an approach that weights sentences based on the distribution of words that is derived from the document. Sentence selection is applied iteratively by selecting words with highest probability and then finding the highest scoring sentence that contains that word. The word weights are updated after each iteration to prevent selection of similar sentences.",
"Summarization using citation-context and discourse structure BIBREF16 : In this method, the set of citations to the article are used to find the article sentences that directly reflect those citations (citation-contexts). In addition, the scientific discourse of the article is utilized to capture different aspects of the article. The scientific discourse usually follows a structure in which the authors first describe their hypothesis, then the methods, experiment, results and implications. Sentence selection is based on finding the most important sentences in each of the discourse facets of the document using the MMR heuristic.",
"KL Divergence BIBREF17 In this method, the document unigram distribution INLINEFORM0 and the summary unigram distributation INLINEFORM1 are considered; the goal is to find a summary whose distribution is very close to the document distribution. The difference of the distributions is captured by the Kullback-Lieber (KL) divergence, denoted by INLINEFORM2 .",
"Summarization based on Topic Models BIBREF17 : Instead of using unigram distributions for modeling the content distribution of the document and the summary, this method models the document content using an LDA based topic model BIBREF18 . It then uses the KL divergence between the document and the summary content models for selecting sentences for the summary."
],
[
"We calculated all variants of Rouge scores, our proposed metric, Sera, and the Pyramid score on the generated summaries from the summarizers described in Section SECREF13 . We do not report the Rouge, Sera or pyramid scores of individual systems as it is not the focus of this study. Our aim is to analyze the effectiveness of the evaluation metrics, not the summarization approaches. Therefore, we consider the correlations of the automatic evaluation metrics with the manual Pyramid scores to evaluate their effectiveness; the metrics that show higher correlations with manual judgments are more effective.",
"Table TABREF23 shows the Pearson, Spearman and Kendall correlation of Rouge and Sera, with pyramid scores. Both Rouge and Sera are calculated with stopwords removed and with stemming. Our experiments with inclusion of stopwords and without stemming showed similar results and thus, we do not include those to avoid redundancy."
],
[
"The results of our proposed method (Sera) are shown in the bottom part of Table TABREF23 . In general, Sera shows better correlation with pyramid scores in comparison with Rouge. We observe that the Pearson correlation of Sera with cut-off point of 5 (shown by Sera-5) is 0.823 which is higher than most of the Rouge variants. Similarly, the Spearman and Kendall correlations of the Sera evaluation score is 0.941 and 0.857 respectively, which are higher than all Rouge correlation values. This shows the effectiveness of the simple variant of our proposed summarization evaluation metric.",
"Table TABREF23 also shows the results of other Sera variants including discounting and query reformulation methods. Some of these variants are the result of applying query reformulation in the process of document retrieval which are described in section SECREF3 As illustrated, the Noun Phrases (NP) query reformulation at cut-off point of 5 (shown as Sera-np-5) achieves the highest correlations among all the Sera variants ( INLINEFORM0 = INLINEFORM1 , INLINEFORM2 = INLINEFORM3 = INLINEFORM4 ). In the case of Keywords (KW) query reformulation, without using discounting, we can see that there is no positive gain in correlation. However, keywords when applied on the discounted variant of Sera, result in higher correlations.",
"Discounting has more positive effect when applied on query reformulation-based Sera than on the simple variant of Sera. In the case of discounting and NP query reformulation (Sera-dis-np), we observe higher correlations in comparison with simple Sera. Similarly, in the case of Keywords (KW), positive correlation gain is obtained in most of correlation coefficients. NP without discounting and at cut-off point of 5 (Sera-np-5) shows the highest non-parametric correlation. In addition, the discounted NP at cut-off point of 10 (Sera-np-dis-10) shows the highest parametric correlations.",
"In general, using NP and KW as heuristics for finding the informative concepts in the summary effectively increases the correlations with the manual scores. Selecting informative terms from long queries results in more relevant documents and prevents query drift. Therefore, the overall similarity between the two summaries (candidate and the human written gold summary) is better captured."
],
[
"Another important observation is regarding the effectiveness of Rouge scores (top part of Table TABREF23 ). Interestingly, we observe that many variants of Rouge scores do not have high correlations with human pyramid scores. The lowest F-score correlations are for Rouge-1 and Rouge-L (with INLINEFORM0 =0.454). Weak correlation of Rouge-1 shows that matching unigrams between the candidate summary and gold summaries is not accurate in quantifying the quality of the summary. On higher order n-grams, however, we can see that Rouge correlates better with pyramid. In fact, the highest overall INLINEFORM1 is obtained by Rouge-3. Rouge-L and its weighted version Rouge-W, both have weak correlations with pyramid. Skip-bigrams (Rouge-S) and its combination with unigrams (Rouge-SU) also show sub-optimal correlations. Note that INLINEFORM2 and INLINEFORM3 correlations are more reliable in our setup due to the small sample size.",
"These results confirm our initial hypothesis that Rouge is not accurate estimator of the quality of the summary in scientific summarization. We attribute this to the differences of scientific summarization with general domain summaries. When humans summarize a relatively long research paper, they might use different terminology and paraphrasing. Therefore, Rouge which only relies on term matching between a candidate and a gold summary, is not accurate in quantifying the quality of the candidate summary."
],
[
"Table TABREF25 shows correlations of our metric Sera with Rouge-2 and Rouge-3, which are the highest correlated Rouge variants with pyramid. We can see that in general, the correlation is not strong. Keyword based reduction variants are the only variants for which the correlation with Rouge is high. Looking at the correlations of KW variants of Sera with pyramid (Table TABREF23 , bottom part), we observe that these variants are also highly correlated with manual evaluation."
],
[
"Finally, Figure FIGREF28 shows INLINEFORM0 correlation of different variants of Sera with pyramid based on selection of different cut-off points ( INLINEFORM1 and INLINEFORM2 correlations result in very similar graphs). When the cut-off point increases, more documents are retrieved for the candidate and the gold summaries, and therefore the final Sera score is more fine-grained. A general observation is that as the search cut-off point increases, the correlation with pyramid scores decreases. This is because when the retrieved result list becomes larger, the probability of including less related documents increases which negatively affects correct estimation of the similarity of the candidate and gold summaries. The most accurate estimations are for metrics with cut-off points of 5 and 10 which are included in the reported results of all variants in Table TABREF23 ."
],
[
"Rouge BIBREF1 assesses the content quality of a candidate summary with respect to a set of human gold summaries based on their lexical overlaps. Rouge consists of several variants. Since its introduction, Rouge has been one of the most widely reported metrics in the summarization literature, and its high adoption has been due to its high correlation with human assessment scores in DUC datasets BIBREF1 . However, later research has casted doubts about the accuracy of Rouge against manual evaluations. conroy2008mind analyzed DUC 2005 to 2007 data and showed that while some systems achieve high Rouge scores with respect to human summaries, the linguistic and responsiveness scores of those systems do not correspond to the high Rouge scores.",
"We studied the effectiveness of Rouge through correlation analysis with manual scores. Besides correlation with human assessment scores, other approaches have been explored for analyzing the effectiveness of summarization evaluation. Rankel:2011 studied the extent to which a metric can distinguish between the human and system generated summaries. They also proposed the use of paired two-sample t-tests and the Wilcoxon signed-rank test as an alternative to Rouge in evaluating several summarizers. Similarly, owczarzak2012assessment proposed the use of multiple binary significance tests between the system summaries for ranking the best summarizers.",
"Since introduction of Rouge, there have been other efforts for improving automatic summarization evaluation. hovy2006automated proposed an approach based on comparison of so called Basic Elements (BE) between the candidate and reference summaries. BEs were extracted based on syntactic structure of the sentence. The work by conroy2011nouveau was another attempt for improving Rouge for update summarization which combined two different Rouge variants and showed higher correlations with manual judgments for TAC 2008 update summaries.",
"Apart from the content, other aspects of summarization such as linguistic quality have been also studied. pitler2010automatic evaluated a set of models based on syntactic features, language models and entity coherences for assessing the linguistic quality of the summaries. Machine translation evaluation metrics such as blue have also been compared and contrasted against Rouge BIBREF19 . Despite these works, when gold-standard summaries are available, Rouge is still the most common evaluation metric that is used in the summarization published research. Apart from Rouge's initial good results on the newswire data, the availability of the software and its efficient performance have further contributed to its popularity."
],
[
"We provided an analysis of existing evaluation metrics for scientific summarization with evaluation of all variants of Rouge. We showed that Rouge may not be the best metric for summarization evaluation; especially in summaries with high terminology variations and paraphrasing (e.g. scientific summaries). Furthermore, we showed that different variants of Rouge result in different correlation values with human judgments, indicating that not all Rouge scores are equally effective. Among all variants of Rouge, Rouge-2 and Rouge-3 are better correlated with manual judgments in the context of scientific summarization. We furthermore proposed an alternative and more effective approach for scientific summarization evaluation (Summarization Evaluation by Relevance Analysis - Sera). Results revealed that in general, the proposed evaluation metric achieves higher correlations with semi-manual pyramid evaluation scores in comparison with Rouge.",
"Our analysis on the effectiveness of evaluation measures for scientific summaries was performed using correlations with manual judgments. An alternative approach to follow would be to use statistical significance testing on the ability of the metrics to distinguish between the summarizers (similar to Rankel:2011). We studied the effectiveness of existing summarization evaluation metrics in the scientific text genre and proposed an alternative superior metric. Another extension of this work would be to evaluate automatic summarization evaluation in other genres of text (such as social media). Our proposed method only evaluates the content quality of the summary. Similar to most of existing summarization evaluation metrics, other qualities such as linguistic cohesion, coherence and readability are not captured by this method. Developing metrics that also incorporate these qualities is yet another future direction to follow."
],
[
"We would like to thank all three anonymous reviewers for their feedback and comments, and Maryam Iranmanesh for helping in annotation. This work was partially supported by National Science Foundation (NSF) through grant CNS-1204347."
]
],
"section_name": [
"Introduction",
"Summarization evaluation by Rouge",
"Summarization Evaluation by Relevance Analysis (Sera)",
"Data",
"Annotations",
"Summarization approaches",
"Results and Discussion",
"Sera",
"Rouge",
"Correlation of Sera with Rouge",
"Effect of the rank cut-off point",
"Related work",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"31de5bc2c821ab31c1a3e2fc1e65611c69df43c8",
"69fd741025d93f9a26f905d915d391baf63446b6"
],
"answer": [
{
"evidence": [
"To the best of our knowledge, the only scientific summarization benchmark is from TAC 2014 summarization track. For evaluating the effectiveness of Rouge variants and our metric (Sera), we use this benchmark, which consists of 20 topics each with a biomedical journal article and 4 gold human written summaries.",
"To analyze the quality of the evaluation metrics, following the pyramid framework, we design an annotation scheme that is based on identification of important content units. Consider the following example:",
"Endogeneous small RNAs (miRNA) were genetically screened and studied to find the miRNAs which are related to tumorigenesis."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To the best of our knowledge, the only scientific summarization benchmark is from TAC 2014 summarization track. For evaluating the effectiveness of Rouge variants and our metric (Sera), we use this benchmark, which consists of 20 topics each with a biomedical journal article and 4 gold human written summaries.",
"Consider the following example:\n\nEndogeneous small RNAs (miRNA) were genetically screened and studied to find the miRNAs which are related to tumorigenesis."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"cb3c9b611db8cbb26cc90c6873153681b30e0d65",
"f666db86b536e6964b96184ab29ff949a13dce16"
],
"answer": [
{
"evidence": [
"Our proposed metric is based on analysis of the content relevance between a system generated summary and the corresponding human written gold-standard summaries. On high level, we indirectly evaluate the content relevance between the candidate summary and the human summary using information retrieval. To accomplish this, we use the summaries as search queries and compare the overlaps of the retrieved results. Larger number of overlaps, suggest that the candidate summary has higher content quality with respect to the gold-standard. This method, enables us to also reward for terms that are not lexically equivalent but semantically related. Our method is based on the well established linguistic premise that semantically related words occur in similar contexts BIBREF5 . The context of the words can be considered as surrounding words, sentences in which they appear or the documents. For scientific summarization, we consider the context of the words as the scientific articles in which they appear. Thus, if two concepts appear in identical set of articles, they are semantically related. We consider the two summaries as similar if they refer to same set of articles even if the two summaries do not have high lexical overlaps. To capture if a summary relates to a article, we use information retrieval by considering the summaries as queries and the articles as documents and we rank the articles based on their relatedness to a given summary. For a given pair of system summary and the gold summary, similar rankings of the retrieved articles suggest that the summaries are semantically related, and thus the system summary is of higher quality."
],
"extractive_spans": [],
"free_form_answer": "The content relevance between the candidate summary and the human summary is evaluated using information retrieval - using the summaries as search queries and compare the overlaps of the retrieved results. ",
"highlighted_evidence": [
" On high level, we indirectly evaluate the content relevance between the candidate summary and the human summary using information retrieval. To accomplish this, we use the summaries as search queries and compare the overlaps of the retrieved results. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our proposed metric is based on analysis of the content relevance between a system generated summary and the corresponding human written gold-standard summaries. On high level, we indirectly evaluate the content relevance between the candidate summary and the human summary using information retrieval. To accomplish this, we use the summaries as search queries and compare the overlaps of the retrieved results. Larger number of overlaps, suggest that the candidate summary has higher content quality with respect to the gold-standard. This method, enables us to also reward for terms that are not lexically equivalent but semantically related. Our method is based on the well established linguistic premise that semantically related words occur in similar contexts BIBREF5 . The context of the words can be considered as surrounding words, sentences in which they appear or the documents. For scientific summarization, we consider the context of the words as the scientific articles in which they appear. Thus, if two concepts appear in identical set of articles, they are semantically related. We consider the two summaries as similar if they refer to same set of articles even if the two summaries do not have high lexical overlaps. To capture if a summary relates to a article, we use information retrieval by considering the summaries as queries and the articles as documents and we rank the articles based on their relatedness to a given summary. For a given pair of system summary and the gold summary, similar rankings of the retrieved articles suggest that the summaries are semantically related, and thus the system summary is of higher quality."
],
"extractive_spans": [
"On high level, we indirectly evaluate the content relevance between the candidate summary and the human summary using information retrieval."
],
"free_form_answer": "",
"highlighted_evidence": [
"On high level, we indirectly evaluate the content relevance between the candidate summary and the human summary using information retrieval. To accomplish this, we use the summaries as search queries and compare the overlaps of the retrieved results. Larger number of overlaps, suggest that the candidate summary has higher content quality with respect to the gold-standard."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"13927e69877e1c05982976927f3521d046df8769",
"5ce130dad2e0afcbf3a91f820ad1b472cb075570"
],
"answer": [
{
"evidence": [
"Table TABREF23 shows the Pearson, Spearman and Kendall correlation of Rouge and Sera, with pyramid scores. Both Rouge and Sera are calculated with stopwords removed and with stemming. Our experiments with inclusion of stopwords and without stemming showed similar results and thus, we do not include those to avoid redundancy.",
"Another important observation is regarding the effectiveness of Rouge scores (top part of Table TABREF23 ). Interestingly, we observe that many variants of Rouge scores do not have high correlations with human pyramid scores. The lowest F-score correlations are for Rouge-1 and Rouge-L (with INLINEFORM0 =0.454). Weak correlation of Rouge-1 shows that matching unigrams between the candidate summary and gold summaries is not accurate in quantifying the quality of the summary. On higher order n-grams, however, we can see that Rouge correlates better with pyramid. In fact, the highest overall INLINEFORM1 is obtained by Rouge-3. Rouge-L and its weighted version Rouge-W, both have weak correlations with pyramid. Skip-bigrams (Rouge-S) and its combination with unigrams (Rouge-SU) also show sub-optimal correlations. Note that INLINEFORM2 and INLINEFORM3 correlations are more reliable in our setup due to the small sample size."
],
"extractive_spans": [
"we observe that many variants of Rouge scores do not have high correlations with human pyramid scores"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF23 shows the Pearson, Spearman and Kendall correlation of Rouge and Sera, with pyramid scores.",
"Interestingly, we observe that many variants of Rouge scores do not have high correlations with human pyramid scores. The lowest F-score correlations are for Rouge-1 and Rouge-L (with INLINEFORM0 =0.454). Weak correlation of Rouge-1 shows that matching unigrams between the candidate summary and gold summaries is not accurate in quantifying the quality of the summary."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We provided an analysis of existing evaluation metrics for scientific summarization with evaluation of all variants of Rouge. We showed that Rouge may not be the best metric for summarization evaluation; especially in summaries with high terminology variations and paraphrasing (e.g. scientific summaries). Furthermore, we showed that different variants of Rouge result in different correlation values with human judgments, indicating that not all Rouge scores are equally effective. Among all variants of Rouge, Rouge-2 and Rouge-3 are better correlated with manual judgments in the context of scientific summarization. We furthermore proposed an alternative and more effective approach for scientific summarization evaluation (Summarization Evaluation by Relevance Analysis - Sera). Results revealed that in general, the proposed evaluation metric achieves higher correlations with semi-manual pyramid evaluation scores in comparison with Rouge.",
"FLOAT SELECTED: Table 2: Correlation between variants of ROUGE and SERA, with human pyramid scores. All variants of ROUGE are displayed. F : F-Score; R: Recall; P : Precision; DIS: Discounted variant of SERA; KW: using Keyword query reformulation; NP: Using noun phrases for query reformulation. The numbers in front of the SERA metrics indicate the rank cut-off point."
],
"extractive_spans": [],
"free_form_answer": "Using Pearson corelation measure, for example, ROUGE-1-P is 0.257 and ROUGE-3-F 0.878.",
"highlighted_evidence": [
"Furthermore, we showed that different variants of Rouge result in different correlation values with human judgments, indicating that not all Rouge scores are equally effective. Among all variants of Rouge, Rouge-2 and Rouge-3 are better correlated with manual judgments in the context of scientific summarization. ",
"FLOAT SELECTED: Table 2: Correlation between variants of ROUGE and SERA, with human pyramid scores. All variants of ROUGE are displayed. F : F-Score; R: Recall; P : Precision; DIS: Discounted variant of SERA; KW: using Keyword query reformulation; NP: Using noun phrases for query reformulation. The numbers in front of the SERA metrics indicate the rank cut-off point."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"458dea27690b2f6d5fff601326fe674fff5c1666",
"87581826c3b99f62fdee0e6605b25b2d4695b872"
],
"answer": [
{
"evidence": [
"In the TAC 2014 summarization track, Rouge was suggested as the evaluation metric for summarization and no human assessment was provided for the topics. Therefore, to study the effectiveness of the evaluation metrics, we use the semi-manual Pyramid evaluation framework BIBREF7 , BIBREF8 . In the pyramid scoring, the content units in the gold human written summaries are organized in a pyramid. In this pyramid, the content units are organized in tiers and higher tiers of the pyramid indicate higher importance. The content quality of a given candidate summary is evaluated with respect to this pyramid."
],
"extractive_spans": [
" higher tiers of the pyramid"
],
"free_form_answer": "",
"highlighted_evidence": [
" In the pyramid scoring, the content units in the gold human written summaries are organized in a pyramid. In this pyramid, the content units are organized in tiers and higher tiers of the pyramid indicate higher importance. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In the TAC 2014 summarization track, Rouge was suggested as the evaluation metric for summarization and no human assessment was provided for the topics. Therefore, to study the effectiveness of the evaluation metrics, we use the semi-manual Pyramid evaluation framework BIBREF7 , BIBREF8 . In the pyramid scoring, the content units in the gold human written summaries are organized in a pyramid. In this pyramid, the content units are organized in tiers and higher tiers of the pyramid indicate higher importance. The content quality of a given candidate summary is evaluated with respect to this pyramid.",
"To analyze the quality of the evaluation metrics, following the pyramid framework, we design an annotation scheme that is based on identification of important content units. Consider the following example:"
],
"extractive_spans": [
"following the pyramid framework, we design an annotation scheme"
],
"free_form_answer": "",
"highlighted_evidence": [
"Therefore, to study the effectiveness of the evaluation metrics, we use the semi-manual Pyramid evaluation framework BIBREF7 , BIBREF8 . In the pyramid scoring, the content units in the gold human written summaries are organized in a pyramid. In this pyramid, the content units are organized in tiers and higher tiers of the pyramid indicate higher importance. The content quality of a given candidate summary is evaluated with respect to this pyramid.",
"To analyze the quality of the evaluation metrics, following the pyramid framework, we design an annotation scheme that is based on identification of important content units. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"044154572f66b0542e0ad323b030bc2f9498e270"
],
"answer": [
{
"evidence": [
"Scientific summarization has attracted more attention recently (examples include works by abu2011coherent, qazvinian2013generating, and cohan2015scientific). Thus, it is important to study the validity of existing methodologies applied to the evaluation of news article summarization for this task. In particular, we raise the important question of how effective is Rouge, as an evaluation metric for scientific summarization? We answer this question by comparing Rouge scores with semi-manual evaluation score (Pyramid) in TAC 2014 scientific summarization dataset[1]. Results reveal that, contrary to the common belief, correlations between Rouge and the Pyramid scores are weak, which challenges its effectiveness for scientific summarization. Furthermore, we show a large variance of correlations between different Rouge variants and the manual evaluations which further makes the reliability of Rouge for evaluating scientific summaries less clear. We then propose an evaluation metric based on relevance analysis of summaries which aims to overcome the limitation of high lexical dependence in Rouge. We call our metric Sera (Summarization Evaluation by Relevance Analysis). Results show that the proposed metric achieves higher and more consistent correlations with semi-manual assessment scores."
],
"extractive_spans": [
"correlations between Rouge and the Pyramid scores are weak, which challenges its effectiveness for scientific summarization"
],
"free_form_answer": "",
"highlighted_evidence": [
"Results reveal that, contrary to the common belief, correlations between Rouge and the Pyramid scores are weak, which challenges its effectiveness for scientific summarization. Furthermore, we show a large variance of correlations between different Rouge variants and the manual evaluations which further makes the reliability of Rouge for evaluating scientific summaries less clear."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"Do the authors report results only on English data?",
"In the proposed metric, how is content relevance measured?",
"What different correlations result when using different variants of ROUGE scores?",
"What manual Pyramid scores are used?",
"What is the common belief that this paper refutes? (c.f. 'contrary to the common belief, ROUGE is not much [sic] reliable'"
],
"question_id": [
"c49ee6ac4dc812ff84d255886fd5aff794f53c39",
"3f856097be2246bde8244add838e83a2c793bd17",
"bf52c01bf82612d0c7bbf2e6a5bb2570c322936f",
"74e866137b3452ec50fb6feaf5753c8637459e62",
"184b0082e10ce191940c1d24785b631828a9f714"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Example of nugget annotation for Pyramid scores. The pyramid tier represents the number of occurrences of the nugget in all the human written gold summaries.",
"Table 2: Correlation between variants of ROUGE and SERA, with human pyramid scores. All variants of ROUGE are displayed. F : F-Score; R: Recall; P : Precision; DIS: Discounted variant of SERA; KW: using Keyword query reformulation; NP: Using noun phrases for query reformulation. The numbers in front of the SERA metrics indicate the rank cut-off point.",
"Table 3: Correlation between SERA and ROUGE scores. NP: Query reformulation with Noun Phrases; KW: Query reformulation with Keywords; DIS: Discounted variant of SERA; The numbers in front of the SERA metrics indicate the rank cut-off point.",
"Figure 1: ρ correlation of SERA with pyramid based on different cut-off points. The x-axis shows the cut-off point parameter. DIS: Discounted variant of SERA; NP: Query reformulation with Noun Phrases; KW: Query reformulation with Keywords."
],
"file": [
"4-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Figure1-1.png"
]
} | [
"In the proposed metric, how is content relevance measured?",
"What different correlations result when using different variants of ROUGE scores?"
] | [
[
"1604.00400-Summarization Evaluation by Relevance Analysis (Sera)-1"
],
[
"1604.00400-Results and Discussion-1",
"1604.00400-Rouge-0",
"1604.00400-5-Table2-1.png",
"1604.00400-Conclusions-0"
]
] | [
"The content relevance between the candidate summary and the human summary is evaluated using information retrieval - using the summaries as search queries and compare the overlaps of the retrieved results. ",
"Using Pearson corelation measure, for example, ROUGE-1-P is 0.257 and ROUGE-3-F 0.878."
] | 127 |
1905.12801 | Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function | Gender bias exists in natural language datasets which neural language models tend to learn, resulting in biased text generation. In this research, we propose a debiasing approach based on the loss function modification. We introduce a new term to the loss function which attempts to equalize the probabilities of male and female words in the output. Using an array of bias evaluation metrics, we provide empirical evidence that our approach successfully mitigates gender bias in language models without increasing perplexity. In comparison to existing debiasing strategies, data augmentation, and word embedding debiasing, our method performs better in several aspects, especially in reducing gender bias in occupation words. Finally, we introduce a combination of data augmentation and our approach, and show that it outperforms existing strategies in all bias evaluation metrics. | {
"paragraphs": [
[
"Natural Language Processing (NLP) models are shown to capture unwanted biases and stereotypes found in the training data which raise concerns about socioeconomic, ethnic and gender discrimination when these models are deployed for public use BIBREF0 , BIBREF1 .",
"There are numerous studies that identify algorithmic bias in NLP applications. BIBREF2 showed ethnic bias in Google autocomplete suggestions whereas BIBREF3 found gender bias in advertisement delivery systems. Additionally, BIBREF1 demonstrated that coreference resolution systems exhibit gender bias.",
"Language modelling is a pivotal task in NLP with important downstream applications such as text generation BIBREF4 . Recent studies by BIBREF0 and BIBREF5 have shown that this task is vulnerable to gender bias in the training corpus. Two prior works focused on reducing bias in language modelling by data preprocessing BIBREF0 and word embedding debiasing BIBREF5 . In this study, we investigate the efficacy of bias reduction during training by introducing a new loss function which encourages the language model to equalize the probabilities of predicting gendered word pairs like he and she. Although we recognize that gender is non-binary, for the purpose of this study, we focus on female and male words.",
"Our main contributions are summarized as follows: i) to our best knowledge, this study is the first one to investigate bias alleviation in text generation by direct modification of the loss function; ii) our new loss function effectively reduces gender bias in the language models during training by equalizing the probabilities of male and female words in the output; iii) we show that end-to-end debiasing of the language model can achieve word embedding debiasing; iv) we provide an interpretation of our results and draw a comparison to other existing debiasing methods. We show that our method, combined with an existing method, counterfactual data augmentation, achieves the best result and outperforms all existing methods."
],
[
"Recently, the study of bias in NLP applications has received increasing attention from researchers. Most relevant work in this domain can be broadly divided into two categories: word embedding debiasing and data debiasing by preprocessing."
],
[
"For the training data, we use Daily Mail news articles released by BIBREF9 . This dataset is composed of 219,506 articles covering a diverse range of topics including business, sports, travel, etc., and is claimed to be biased and sensational BIBREF5 . For manageability, we randomly subsample 5% of the text. The subsample has around 8.25 million tokens in total."
],
[
"We use a pre-trained 300-dimensional word embedding, GloVe, by BIBREF10 . We apply random search to the hyperparameter tuning of the LSTM language model. The best hyperparameters are as follows: 2 hidden layers each with 300 units, a sequence length of 35, a learning rate of 20 with an annealing schedule of decay starting from 0.25 to 0.95, a dropout rate of 0.25 and a gradient clip of 0.25. We train our models for 150 epochs, use a batch size of 48, and set early stopping with a patience of 5."
],
[
"Language models are usually trained using cross-entropy loss. Cross-entropy loss at time step INLINEFORM0 is INLINEFORM1 ",
"where INLINEFORM0 is the vocabulary, INLINEFORM1 is the one hot vector of ground truth and INLINEFORM2 indicates the output softmax probability of the model.",
"We introduce a loss term INLINEFORM0 , which aims to equalize the predicted probabilities of gender pairs such as woman and man. INLINEFORM1 ",
" INLINEFORM0 and INLINEFORM1 are a set of corresponding gender pairs, INLINEFORM2 is the size of the gender pairs set, and INLINEFORM3 indicates the output softmax probability. We use gender pairs provided by BIBREF7 . By considering only gender pairs we ensure that only gender information is neutralized and distribution over semantic concepts is not altered. For example, it will try to equalize the probabilities of congressman with congresswoman and actor with actress but distribution of congressman, congresswoman versus actor, actress will not be affected. Overall loss can be written as INLINEFORM4 ",
"where INLINEFORM0 is a hyperparameter and INLINEFORM1 is the corpus size. We observe that among the similar minima of the loss function, INLINEFORM2 encourages the model to converge towards a minimum that exhibits the lowest gender bias."
],
[
"Language models are evaluated using perplexity, which is a standard measure of performance for unseen data. For bias evaluation, we use an array of metrics to provide a holistic diagnosis of the model behavior under debiasing treatment. These metrics are discussed in detail below. In all the evaluation metrics requiring gender pairs, we use gender pairs provided by BIBREF7 . This list contains 223 pairs, all other words are considered gender-neutral.",
"Co-occurrence bias is computed from the model-generated texts by comparing the occurrences of all gender-neutral words with female and male words. A word is considered to be biased towards a certain gender if it occurs more frequently with words of that gender. This definition was first used by BIBREF7 and later adapted by BIBREF5 . Using the definition of gender bias similar to the one used by BIBREF5 , we define gender bias as INLINEFORM0 ",
"where INLINEFORM0 is a set of gender-neutral words, and INLINEFORM1 is the occurrences of a word INLINEFORM2 with words of gender INLINEFORM3 in the same window. This score is designed to capture unequal co-occurrences of neutral words with male and female words. Co-occurrences are computed using a sliding window of size 10 extending equally in both directions. Furthermore, we only consider words that occur more than 20 times with gendered words to exclude random effects.",
"We also evaluate a normalized version of INLINEFORM0 which we denote by conditional co-occurrence bias, INLINEFORM1 . This is defined as INLINEFORM2 ",
"where INLINEFORM0 ",
" INLINEFORM0 is less affected by the disparity in the general distribution of male and female words in the text. The disparity between the occurrences of the two genders means that text is more inclined to mention one over the other, so it can also be considered a form of bias. We report the ratio of occurrence of male and female words in the model generated text, INLINEFORM1 , as INLINEFORM2 ",
"Another way of quantifying bias in NLP models is based on the idea of causal testing. The model is exposed to paired samples which differ only in one attribute (e.g. gender) and the disparity in the output is interpreted as bias related to that attribute. BIBREF1 and BIBREF0 applied this method to measure bias in coreference resolution and BIBREF0 also used it for evaluating gender bias in language modelling.",
"Following the approach similar to BIBREF0 , we limit this bias evaluation to a set of gender-neutral occupations. We create a list of sentences based on a set of templates. There are two sets of templates used for evaluating causal occupation bias (Table TABREF7 ). The first set of templates is designed to measure how the probabilities of occupation words depend on the gender information in the seed. Below is an example of the first set of templates: INLINEFORM0 ",
"Here, the vertical bar separates the seed sequence that is fed into the language models from the target occupation, for which we observe the output softmax probability. We measure causal occupation bias conditioned on gender as INLINEFORM0 ",
"where INLINEFORM0 is a set of gender-neutral occupations and INLINEFORM1 is the size of the gender pairs set. For example, INLINEFORM2 is the softmax probability of the word INLINEFORM3 where the seed sequence is He is a. The second set of templates like below, aims to capture how the probabilities of gendered words depend on the occupation words in the seed. INLINEFORM4 ",
"Causal occupation bias conditioned on occupation is represented as INLINEFORM0 ",
"where INLINEFORM0 is a set of gender-neutral occupations and INLINEFORM1 is the size of the gender pairs set. For example, INLINEFORM2 is the softmax probability of man where the seed sequence is The doctor is a.",
"We believe that both INLINEFORM0 and INLINEFORM1 contribute to gender bias in the model-generated texts. We also note that INLINEFORM2 is more easily influenced by the general disparity in male and female word probabilities.",
"Our debiasing approach does not explicitly address the bias in the embedding layer. Therefore, we use gender-neutral occupations to measure the embedding bias to observe if debiasing the output layer also decreases the bias in the embedding. We define the embedding bias, INLINEFORM0 , as the difference between the Euclidean distance of an occupation word to male words and the distance of the occupation word to the female counterparts. This definition is equivalent to bias by projection described by BIBREF6 . We define INLINEFORM1 as INLINEFORM2 ",
"where INLINEFORM0 is a set of gender-neutral occupations, INLINEFORM1 is the size of the gender pairs set and INLINEFORM2 is the word-to-vector dictionary."
],
[
"We apply CDA where we swap all the gendered words using a bidirectional dictionary of gender pairs described by BIBREF0 . This creates a dataset twice the size of the original data, with exactly the same contextual distributions for both genders and we use it to train the language models.",
"We also implement the bias regularization method of BIBREF5 which debiases the word embedding during language model training by minimizing the projection of neutral words on the gender axis. We use hyperparameter tuning to find the best regularization coefficient and report results from the model trained with this coefficient. We later refer to this strategy as REG."
],
[
"Initially, we measure the co-occurrence bias in the training data. After training the baseline model, we implement our loss function and tune for the INLINEFORM0 hyperparameter. We test the existing debiasing approaches, CDA and REG, as well but since BIBREF5 reported that results fluctuate substantially with different REG regularization coefficients, we perform hyperparameter tuning and report the best results in Table TABREF12 . Additionally, we implement a combination of our loss function and CDA and tune for INLINEFORM1 . Finally, bias evaluation is performed for all the trained models. Causal occupation bias is measured directly from the models using template datasets discussed above and co-occurrence bias is measured from the model-generated texts, which consist of 10,000 documents of 500 words each."
],
[
"Results for the experiments are listed in Table TABREF12 . It is interesting to observe that the baseline model amplifies the bias in the training data set as measured by INLINEFORM0 and INLINEFORM1 . From measurements using the described bias metrics, our method effectively mitigates bias in language modelling without a significant increase in perplexity. At INLINEFORM2 value of 1, it reduces INLINEFORM3 by 58.95%, INLINEFORM4 by 45.74%, INLINEFORM5 by 100%, INLINEFORM6 by 98.52% and INLINEFORM7 by 98.98%. Compared to the results of CDA and REG, it achieves the best results in both occupation biases, INLINEFORM8 and INLINEFORM9 , and INLINEFORM10 . We notice that all methods result in INLINEFORM11 around 1, indicating that there are near equal amounts of female and male words in the generated texts. In our experiments we note that with increasing INLINEFORM12 , the bias steadily decreases and perplexity tends to slightly increase. This indicates that there is a trade-off between bias and perplexity.",
"REG is not very effective in mitigating bias when compared to other methods, and fails to achieve the best result in any of the bias metrics that we used. But REG results in the best perplexity and even does better than the baseline model in this respect. This indicates that REG has a slight regularization effect. Additionally, it is interesting to note that our loss function outperforms REG in INLINEFORM0 even though REG explicitly aims to reduce gender bias in the embeddings. Although our method does not explicitly attempt geometric debiasing of the word embedding, the results show that it results in the most debiased embedding as compared to other methods. Furthermore, BIBREF8 emphasizes that geometric gender bias in word embeddings is not completely understood and existing word embedding debiasing strategies are insufficient. Our approach provides an appealing end-to-end solution for model debiasing without relying on any measure of bias in the word embedding. We believe this concept is generalizable to other NLP applications.",
"Our method outperforms CDA in INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 . While CDA achieves slightly better results for co-occurrence biases, INLINEFORM3 and INLINEFORM4 , and results in a better perplexity. With a marginal differences, our results are comparable to those of CDA and both models seem to have similar bias mitigation effects. However, our method does not require a data augmentation step and allows training of an unbiased model directly from biased datasets. For this reason, it also requires less time to train than CDA since its training data has a smaller size without data augmentation. Furthermore, CDA fails to effectively mitigate occupation bias when compared to our approach. Although the training data for CDA does not contain gender bias, the model still exhibits some gender bias when measured with our causal occupation bias metrics. This reinforces the concept that some model-level constraints are essential to debiasing a model and dataset debiasing alone cannot be trusted.",
"Finally, we note that the combination of CDA and our loss function outperforms all the methods in all measures of biases without compromising perplexity. Therefore, it can be argued that a cascade of these approaches can be used to optimally debias the language models."
],
[
"In this research, we propose a new approach for mitigating gender bias in neural language models and empirically show its effectiveness in reducing bias as measured with different evaluation metrics. Our research also highlights the fact that debiasing the model with bias penalties in the loss function is an effective method. We emphasize that loss function based debiasing is powerful and generalizable to other downstream NLP applications. The research also reinforces the idea that geometric debiasing of the word embedding is not a complete solution for debiasing the downstream applications but encourages end-to-end approaches to debiasing.",
"All the debiasing techniques experimented in this paper rely on a predefined set of gender pairs in some way. CDA used gender pairs for flipping, REG uses it for gender space definition and our technique uses them for computing loss. This reliance on pre-defined set of gender pairs can be considered a limitation of these methods. It also results in another concern. There are gender associated words which do not have pairs, like pregnant. These words are not treated properly by techniques relying on gender pairs.",
"Future work includes designing a context-aware version of our loss function which can distinguish between the unbiased and biased mentions of the gendered words and only penalize the biased version. Another interesting direction is exploring the application of this method in mitigating racial bias which brings more challenges."
],
[
"We are grateful to Sam Bowman for helpful advice, Shikha Bordia, Cuiying Yang, Gang Qian, Xiyu Miao, Qianyi Fan, Tian Liu, and Stanislav Sobolevsky for discussions, and reviewers for detailed feedback. "
]
],
"section_name": [
"Introduction",
"Related Work",
"Dataset",
"Language Model",
"Loss Function",
"Model Evaluation",
"Existing Approaches",
"Experiments",
"Results",
"Conclusion and Discussion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"063ddd8b2727842124e65759626f7117793aaf53",
"d09361afd7a795d94c4f04c2d728ec553d797971"
],
"answer": [
{
"evidence": [
"Initially, we measure the co-occurrence bias in the training data. After training the baseline model, we implement our loss function and tune for the INLINEFORM0 hyperparameter. We test the existing debiasing approaches, CDA and REG, as well but since BIBREF5 reported that results fluctuate substantially with different REG regularization coefficients, we perform hyperparameter tuning and report the best results in Table TABREF12 . Additionally, we implement a combination of our loss function and CDA and tune for INLINEFORM1 . Finally, bias evaluation is performed for all the trained models. Causal occupation bias is measured directly from the models using template datasets discussed above and co-occurrence bias is measured from the model-generated texts, which consist of 10,000 documents of 500 words each."
],
"extractive_spans": [
"CDA",
"REG"
],
"free_form_answer": "",
"highlighted_evidence": [
"We test the existing debiasing approaches, CDA and REG, as well but since BIBREF5 reported that results fluctuate substantially with different REG regularization coefficients, we perform hyperparameter tuning and report the best results in Table TABREF12 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"82c362a9c658a6dc3a417ddcd9fd2189f50c1d14",
"90a55c7c8bfa545a85ba21756b124ac5940c647d"
],
"answer": [
{
"evidence": [
"For the training data, we use Daily Mail news articles released by BIBREF9 . This dataset is composed of 219,506 articles covering a diverse range of topics including business, sports, travel, etc., and is claimed to be biased and sensational BIBREF5 . For manageability, we randomly subsample 5% of the text. The subsample has around 8.25 million tokens in total."
],
"extractive_spans": [
"Daily Mail news articles released by BIBREF9 "
],
"free_form_answer": "",
"highlighted_evidence": [
"For the training data, we use Daily Mail news articles released by BIBREF9 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the training data, we use Daily Mail news articles released by BIBREF9 . This dataset is composed of 219,506 articles covering a diverse range of topics including business, sports, travel, etc., and is claimed to be biased and sensational BIBREF5 . For manageability, we randomly subsample 5% of the text. The subsample has around 8.25 million tokens in total."
],
"extractive_spans": [
"Daily Mail news articles"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the training data, we use Daily Mail news articles released by BIBREF9 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"f7ebe9936a981d9c1e4db1b7f5374b63307a6daa"
],
"answer": [
{
"evidence": [
"Language modelling is a pivotal task in NLP with important downstream applications such as text generation BIBREF4 . Recent studies by BIBREF0 and BIBREF5 have shown that this task is vulnerable to gender bias in the training corpus. Two prior works focused on reducing bias in language modelling by data preprocessing BIBREF0 and word embedding debiasing BIBREF5 . In this study, we investigate the efficacy of bias reduction during training by introducing a new loss function which encourages the language model to equalize the probabilities of predicting gendered word pairs like he and she. Although we recognize that gender is non-binary, for the purpose of this study, we focus on female and male words."
],
"extractive_spans": [
"gendered word pairs like he and she"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this study, we investigate the efficacy of bias reduction during training by introducing a new loss function which encourages the language model to equalize the probabilities of predicting gendered word pairs like he and she. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"40ddfd02bad4b96ae41a797006fe0522b4c30b75",
"bd7cfdd9b805e4a4e229bdd2f77db781c1178a0c"
],
"answer": [
{
"evidence": [
"Results for the experiments are listed in Table TABREF12 . It is interesting to observe that the baseline model amplifies the bias in the training data set as measured by INLINEFORM0 and INLINEFORM1 . From measurements using the described bias metrics, our method effectively mitigates bias in language modelling without a significant increase in perplexity. At INLINEFORM2 value of 1, it reduces INLINEFORM3 by 58.95%, INLINEFORM4 by 45.74%, INLINEFORM5 by 100%, INLINEFORM6 by 98.52% and INLINEFORM7 by 98.98%. Compared to the results of CDA and REG, it achieves the best results in both occupation biases, INLINEFORM8 and INLINEFORM9 , and INLINEFORM10 . We notice that all methods result in INLINEFORM11 around 1, indicating that there are near equal amounts of female and male words in the generated texts. In our experiments we note that with increasing INLINEFORM12 , the bias steadily decreases and perplexity tends to slightly increase. This indicates that there is a trade-off between bias and perplexity."
],
"extractive_spans": [],
"free_form_answer": "Using INLINEFORM0 and INLINEFORM1",
"highlighted_evidence": [
"It is interesting to observe that the baseline model amplifies the bias in the training data set as measured by INLINEFORM0 and INLINEFORM1 . From measurements using the described bias metrics, our method effectively mitigates bias in language modelling without a significant increase in perplexity."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0d915063c8845886649a62cd7355f232cc876bf1",
"fbb8ef1ce32852cdbc21914a5eb58499f5adf13c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Co-occurrence bias is computed from the model-generated texts by comparing the occurrences of all gender-neutral words with female and male words. A word is considered to be biased towards a certain gender if it occurs more frequently with words of that gender. This definition was first used by BIBREF7 and later adapted by BIBREF5 . Using the definition of gender bias similar to the one used by BIBREF5 , we define gender bias as INLINEFORM0",
"where INLINEFORM0 is a set of gender-neutral words, and INLINEFORM1 is the occurrences of a word INLINEFORM2 with words of gender INLINEFORM3 in the same window. This score is designed to capture unequal co-occurrences of neutral words with male and female words. Co-occurrences are computed using a sliding window of size 10 extending equally in both directions. Furthermore, we only consider words that occur more than 20 times with gendered words to exclude random effects.",
"We also evaluate a normalized version of INLINEFORM0 which we denote by conditional co-occurrence bias, INLINEFORM1 . This is defined as INLINEFORM2",
"INLINEFORM0 is less affected by the disparity in the general distribution of male and female words in the text. The disparity between the occurrences of the two genders means that text is more inclined to mention one over the other, so it can also be considered a form of bias. We report the ratio of occurrence of male and female words in the model generated text, INLINEFORM1 , as INLINEFORM2",
"Causal occupation bias conditioned on occupation is represented as INLINEFORM0",
"Here, the vertical bar separates the seed sequence that is fed into the language models from the target occupation, for which we observe the output softmax probability. We measure causal occupation bias conditioned on gender as INLINEFORM0",
"where INLINEFORM0 is a set of gender-neutral occupations and INLINEFORM1 is the size of the gender pairs set. For example, INLINEFORM2 is the softmax probability of the word INLINEFORM3 where the seed sequence is He is a. The second set of templates like below, aims to capture how the probabilities of gendered words depend on the occupation words in the seed. INLINEFORM4",
"Our debiasing approach does not explicitly address the bias in the embedding layer. Therefore, we use gender-neutral occupations to measure the embedding bias to observe if debiasing the output layer also decreases the bias in the embedding. We define the embedding bias, INLINEFORM0 , as the difference between the Euclidean distance of an occupation word to male words and the distance of the occupation word to the female counterparts. This definition is equivalent to bias by projection described by BIBREF6 . We define INLINEFORM1 as INLINEFORM2",
"where INLINEFORM0 is a set of gender-neutral occupations, INLINEFORM1 is the size of the gender pairs set and INLINEFORM2 is the word-to-vector dictionary."
],
"extractive_spans": [
"gender bias",
"normalized version of INLINEFORM0",
"ratio of occurrence of male and female words in the model generated text",
"Causal occupation bias conditioned on occupation",
"causal occupation bias conditioned on gender",
"INLINEFORM1"
],
"free_form_answer": "",
"highlighted_evidence": [
"Using the definition of gender bias similar to the one used by BIBREF5 , we define gender bias as INLINEFORM0\n\nwhere INLINEFORM0 is a set of gender-neutral words, and INLINEFORM1 is the occurrences of a word INLINEFORM2 with words of gender INLINEFORM3 in the same window. ",
"We also evaluate a normalized version of INLINEFORM0 which we denote by conditional co-occurrence bias, INLINEFORM1 . ",
"We report the ratio of occurrence of male and female words in the model generated text, INLINEFORM1 , as INLINEFORM2",
"Causal occupation bias conditioned on occupation is represented as INLINEFORM0",
"We measure causal occupation bias conditioned on gender as INLINEFORM0\n\nwhere INLINEFORM0 is a set of gender-neutral occupations and INLINEFORM1 is the size of the gender pairs set.",
"We define INLINEFORM1 as INLINEFORM2\n\nwhere INLINEFORM0 is a set of gender-neutral occupations, INLINEFORM1 is the size of the gender pairs set and INLINEFORM2 is the word-to-vector dictionary."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"which existing strategies are compared?",
"what dataset was used?",
"what kinds of male and female words are looked at?",
"how is mitigation of gender bias evaluated?",
"what bias evaluation metrics are used?"
],
"question_id": [
"c59078efa7249acfb9043717237c96ae762c0a8c",
"73bddaaf601a4f944a3182ca0f4de85a19cdc1d2",
"d4e5e3f37679ff68914b55334e822ea18e60a6cf",
"5f60defb546f35d25a094ff34781cddd4119e400",
"90d946ccc3abf494890e147dd85bd489b8f3f0e8"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Example templates of two types of occupation bias",
"Table 2: Evaluation results for models trained on Daily Mail and their generated texts"
],
"file": [
"4-Table1-1.png",
"5-Table2-1.png"
]
} | [
"how is mitigation of gender bias evaluated?"
] | [
[
"1905.12801-Results-0"
]
] | [
"Using INLINEFORM0 and INLINEFORM1"
] | 128 |
1810.12196 | ReviewQA: a relational aspect-based opinion reading dataset | Deep reading models for question-answering have demonstrated promising performance over the last couple of years. However, current systems tend to learn how to cleverly extract a span of the source document, based on its similarity with the question, instead of seeking the appropriate answer. Indeed, a reading machine should be able to detect relevant passages in a document regarding a question, but more importantly, it should be able to reason over the important pieces of the document in order to produce an answer when it is required. To motivate this purpose, we present ReviewQA, a question-answering dataset based on hotel reviews. The questions of this dataset are linked to a set of relational understanding competencies that we expect a model to master. Indeed, each question comes with an associated type that characterizes the required competency. With this framework, it is possible to benchmark the main families of models and to get an overview of the strengths and weaknesses of a given model on the set of tasks evaluated in this dataset. Our corpus contains more than 500,000 questions in natural language over 100,000 hotel reviews. Our setup is projective: the answer to a question does not need to be extracted from a document, as in most recent datasets, but is instead selected from a set of candidates that contains all the possible answers to the questions of the dataset. Finally, we present several baselines over this dataset. | {
"paragraphs": [
[
"A large majority of the human knowledge is recorded through text documents. That is why ability for a system to automatically infer information from text without any structured data has become a major challenge. Answering questions about a given document is a relevant proxy task that has been proposed as a way to evaluate the reading ability of a given model. In this configuration, a text document such as a news article, a document from Wikipedia or any type of text is presented to a machine with an associated set of questions. The system is then expected to answer these questions and evaluated by its accuracy on this task. The machine reading framework is very general and we can imagine a large panel of questions that can possibly handle most of the standard natural language processing tasks. For example, the task of named entities recognition can be formulated as a machine reading one where your document is the sentence and the question would be 'What are the named entities mentioned in this sentence?'. These natural language interactions are an important objective for reading systems.",
"Recently, many datasets have been proposed to build and evaluate reading models BIBREF0 , BIBREF1 . From cloze style questions BIBREF2 to open questions BIBREF3 , from synthetic data BIBREF4 to human written articles BIBREF5 , many styles of documents and questions have been proposed to challenge reading models. The correct answer to the questions proposed in most of these datasets is a span of text of the source document, which can be restricted to a single word in several cases. It means that the answer should explicitly be present in the source document and that the model should be able to locate it.",
"Different models have already shown superhuman performance on several of these datasets and particularly on the SQuAD dataset composed of Wikipedia articles BIBREF6 , BIBREF7 . However, some limits of such models have been highlighted when they encounter perturbations into the input documents BIBREF8 . Indeed almost all of the state of the art models on the SQuAD dataset suffer from a lack of robustness against adversarial examples. Once the model is trained, a meaningless sentence added at the end of the text document can completely disturb the reading system. Conversely, these adversarial examples do not seem to fool a human reader who will be capable of answering the questions as well as without this perturbation. One possible explanation of this phenomenon is that computers are good at extracting patterns in the document that match the representation of the question. If multiple spans of the documents look similar to the questions, the reader might not be able to decide which one is relevant. Moreover, Wikipedia articles tend to be written with the same standard writing style, factual, unambiguous. Such writing style tends to favor the pattern matching between the questions and the documents. This format of documents/questions has certainly influenced the design of the comprehension models that have been proposed so far. Most of them are composed of stacked attention layers that match question and document representations.",
"Following concepts proposed in the 20 bAbI tasks BIBREF4 or in the visual question-answering dataset CLEVR BIBREF9 , we think that the challenge, limited to the detection of relevant passages in a document, is only the first step in building systems that truly understand text. The second step is the ability of reasoning with the relevant information extracted from a document. To set up this challenge, we propose to leverage on a hotel reviews corpus that requires reasoning skills to answer natural language questions. The reviews we used have been extracted from TripAdvisor and originally proposed in BIBREF10 , BIBREF11 . In the original data, each review comes with a set of rated aspects among the seventh available: Business service, Check in / Front Desk, Cleanliness, Location, Room, Sleep Quality, Value and for all the reviews an Overall rating. In this articles we propose to exploit these data to create a dataset of question-answering that will challenge 8 competencies of the reader.",
"Our contributions can be summarized as follow:"
],
[
"ReviewQA is proposed as a novel dataset regarding the collection of the existing ones. Indeed a large panel of available datasets, that evaluate models on different types of documents, can only be valuable for designing efficient models and learning protocols. In this following part, we describe several of these datasets.",
"SQuAD: The Standford Question Answering Dataset (SQuAD) introduced in BIBREF0 is a large dataset of natural questions over the 500 most popular articles of Wikipedia. All the questions have been crowdsourced and answers are spans of text extracted from source documents. This dataset has been very popular these last two years and the performance of the architectures that have been proposed have rapidly increased until several models surpass the human score. Indeed, in the original paper human performance has been measured at 82.304 points for the exact match metric and at the time we are writing this paper four models have already a higher score. In another hand BIBREF8 has shown that these models suffer from a lack of robustness against adversarial examples that are meaningless from a human point of view. This suggests the need for a more challenging dataset that will allow developing strongest reasoning architectures.",
"NewsQA: NewsQA BIBREF1 is a dataset very similar to SQuAD. It contains 120.000 human generated questions over 12.000 articles form CNN originally introduced in BIBREF5 . It has been designed to be more challenging than SQuAD with questions that might require to extract multiple spans of text or not be answerable.",
"WikiHop and MedHop: These are two recent datasets introduced in BIBREF13 . Unlike SQuAD and NewsQA, important facts are spread out across multiple documents and, in order to answer a question, it is necessary to jump over a set of passages to collect the required information. The relevant passages are not explicitly mentioned in the data so this dataset measures the ability that a model has to navigate across multiple documents. The questions come with a set of candidates which are all present in the text.",
"MS Marco: This dataset has been released in BIBREF14 . The documents come from the internet and the questions are real user queries asked through the bing search engine. The dataset contains around 100.000 queries and each of them comes with a set of approximatively 10 relevant passages. Like in SQuAD, several models are already doing superhuman performances on this dataset.",
"Facebook bAbI tasks: This is a set of 20 toy tasks proposed in BIBREF4 and designed to measure text understanding. Each task requires a certain capability to be completed like induction, deduction and more. Documents are synthetic stories, composed of few sentences that describe a set of actions. This dataset was one of the first attempt to introduce a general set of prerequisite capabilities required for the reading task. Although it has been a very challenging framework, beneficial to the emergence of the attention mechanism inside the reading architectures, a Gated end-to-end memory network BIBREF15 now succeed in almost all of the 20 tasks. One of the possible reason is that the data are synthetic data, without noise or ambiguity. We propose a comparable framework with understanding and reasoning tasks based on user-generated comments that are much more realistic and that required language competencies to be understood.",
"CLEVR: Beyond textual question-answering, Visual Question-Answering (VQA) has been largely studied during the last couple of years. More recently, the problem of relational reasoning has been introduced through this dataset BIBREF9 . The main original idea was to introduce relational reasoning questions over object shapes and placements. This dataset has already motivated the development of original deep models. To the best of our knowledge, no natural language question-answering corpus has been designed to investigate such capabilities. As we will present in the following of this paper, we think sentiment analysis is particularly suited for this task and we will introduce a novel machine reading corpus with such capability requirements."
],
[
"Sentiment analysis is one of the historical tasks of Natural Language Processing. It is an important challenge for companies, restaurants, hotels that aim to analyze customer satisfaction regarding products and quality of services. Given a text document, the objective is to predict its overall polarity. Generally, it can be positive, negative or neutral. This analysis gives a quick overview of a general sentiment over a set of documents, but this framework tends to be restrictive. Indeed, one document tends to express multiple opinions of different aspects. For instance, in the sentence: The fish was very good but the service was terrible, there is not a general dominant sentiment, and a finer analysis is needed. The task of aspect-based sentiment analysis aims to predict a polarity of a sentence regarding a given aspect. In the previous example a positive polarity should be associated to the aspect food, and on the contrary, a negative sentiment is expressed regarding the quality of the service.",
"The idea of using models originally designed for question-answering, for the sentiment analysis task has been introduced in BIBREF16 , BIBREF17 . In these papers, several adaptations of the end-to-end memory network (MemN2N) BIBREF18 are used to predict the polarity of a review regarding a given aspect. In that configuration, the review is encoded into the memory cells and the controller, usually initialized with a representation of the question, is initialized with a representation of the aspect. The analysis of the attention between the values of the controller and the document has shown interesting results, by highlighting relevant part of a document regarding an aspect."
],
[
"We think that evaluating the task of sentiment analysis through the setup of question-answering is a relevant playground for machine reading research. Indeed natural language questions about the different aspects of the targeted venues are typical kind of questions we want to be able to ask to a system. In this context, we introduce a set of reasoning questions types over the relationships between aspects. We propose ReviewQA, a dataset of natural language questions over hotel reviews. These questions are divided into 8 groups, regarding the competency required to be answered. In this section, we describe each task and the process followed to generate this dataset."
],
[
"We used a set of reviews extracted from the TripAdvisor website and originally proposed in BIBREF10 and BIBREF11 . This corpus is available at http://www.cs.virginia.edu/~hw5x/Data/LARA/TripAdvisor/TripAdvisorJson.tar.bz2. Each review comes with the name of the associated hotel, a title, an overall rating, a comment and a list of rated aspects. From 0 to 7 aspects, among value, room, location, cleanliness, check-in/front desk, service, business service, can possibly be rated in a review. Figure FIGREF8 displays a review extracted from this dataset."
],
[
"Objective: Starting with the original corpus, we aim at building a machine reading task where natural language questions will challenge the model on its understanding of the reviews. Indeed learning relational reasoning competencies over natural language documents is a major challenge of the current reading models. These original raw data allow us to generate relational questions that can possibly require a global understanding of the comment and reasoning skills to be treated. For example, asking a question like What is the best aspect rated in this comment ? is not an easy question that can be answered without a deep understanding of the review. It is necessary to capture all the aspects mentioned in the text, to predict their rating and finally to select the best one. The tasks and the dataset we propose are publicly available at http://www.europe.naverlabs.com/Blog/ReviewQA-A-novel-relational-aspect-based-opinion-dataset-for-machine-reading",
"We introduce a list of 8 different competencies that a reading system should master in order to process reviews and text documents in general. These 8 tasks require different competencies and a different level of understanding of the document to be well answered. For instance, detecting if an aspect is mentioned in a review will require less understanding of the review than predicting explicitly the rating of this aspect. Table TABREF10 presents the 8 tasks we have introduced in this dataset with an example of a question that corresponds to each task. We also provide the expected type of the answer (Yes/No question, rating question...). It can be an additional tool to analyze the errors of the readers."
],
[
"We sample 100.000 reviews from the original corpus. Figure FIGREF12 presents the distribution of the number of words of the reviews in the dataset. We explicitly favor reviews which contain an important number of words. In average, a review contains 200 words. Indeed these long reviews are most likely to contain challenging relations between different aspects. A short review which deals with only a few aspects is more likely to not be very relevant to the challenge we want to propose in this dataset. Figure FIGREF14 displays the distribution of the ratings per aspects in the 100.000 reviews we based our dataset. We can see that the average values of these ratings tend to be quite high. It could have introduced bias if it was not the case for all the aspects. For example, we do not want that the model learns that in general, the service is rated better than the location and them answer without looking at the document. Since this situation is the same for all the aspects, the relational tasks introduced in this dataset remains extremely relevant.",
"Then we randomly select 6 tasks for each review (the same task can be selected multiple times) and randomly select a natural language question that corresponds to this task. The questions are human-generated patterns that we have crowdsourced in order to produce a dataset as rich as possible. To this end, we have generated several patterns that correspond to the capabilities we wanted to express in a given question and we have crowdsourced rephrasing of these patterns.",
"The final dataset we propose is composed of more than 500.000 questions about 100.000 reviews. Table TABREF13 shows the repartition of the documents and queries into the train and test set. Each review contains a maximum of 6 questions. Sometimes less when it is not possible to generate all. For example, if only two or three aspects are mentioned in a review, we will be able to generate only a little set of relational questions. Figure FIGREF15 depicts the repartition of the answers in the generated dataset. A majority of the tasks we introduced, even if they possibly require a high level of understanding of the document and the question, are binary questions. It means that in the generated dataset the answers yes and no tend to be more present than the others. To balance in a better way the distribution of the answers, we chose to affect a higher probability of sampling to the task 5, 6, 7.1, 8. Indeed, these tasks are not binary questions and required an aspect name as the answer. Figure FIGREF17 represents the repartition of question types in our dataset. Finally, figure FIGREF15 shows the repartition of the answers in the dataset."
],
[
"In order to generate more paraphrases of the questions, we used a backtranslation method to enrich them. The idea is to use a translation model that will translate our human-generated questions into another language, and then translate them back to English. This double translation will introduce rewordings of the questions that we will be able to integrate into this dataset. This approach has been used in BIBREF7 to perform data augmentation on the training set. For this purpose, we have trained a fairseq BIBREF19 model to translate sentences from English to French and for French to English. In order to preserve the quality of the sentences we have so far, we only keep the most probable translation of each original sentence. Indeed a beam search is used during the translation to predict the most probable translations which mean that we each translation comes with an associated probability. By selecting only the first translations, we almost double the number of questions without degrading the quality of the questions proposed in the dataset."
],
[
"In this section, we present the performance of four different models on our dataset: a logistic regression and three neural models. The first one is a basic LSTM BIBREF20 , the second a MemN2N BIBREF18 and the third one is a model of our own design. This fourth model reuses the encoding layers of the R-net BIBREF12 and we modify the final layers with a projection layer that will be able to select the answer among the set of candidates instead of pointing the answerer directly into the source document.",
"Logistic regression: To produce the representation of the input, we concatenate the Bag-Of-Words representation of the document with the Bag-Of-Words representation of the question. It produces an array of size INLINEFORM0 where INLINEFORM1 is the vocabulary size. Then we use a logistic regression to select the most probable answer among the INLINEFORM2 possibilities.",
"LSTM: We start with a concatenation of the sequence of indexes of the document with the sequence of indexes of the question. Them we feed an LSTM network with this vector and use the final state as the representation of the input. Finally, we apply a logistic regression over this representation to produce the final decision.",
"End-to-end memory networks: This architecture is based on two different memory cells (input and output) that contain a representation of the document. A controller, initialized with the encoding of the question, is used to calculate an attention between this controller and the representation of the document in the input memory. This attention is them used to re-weight the representation of the document in the output memory. This response from the output memory is them utilized to update the controller. After that, either a matrix is used to project this representation into the answer space either the controller is used to go through an over hop of memory. This architecture allows the model to sequentially look into the initial document seeking for important information regarding the current state of its controller. This model achieves very good performances on the 20 bAbI tasks dataset.",
"Deep projective reader: This is a model of our own design, largely inspired by the efficient R-net reader BIBREF12 . The overall architecture is composed of 4 stacked layers: an encoding layer, a question/document attention, a self-attention layer and a projection layer. The following paragraphs briefly describe the overall utility of each of these layers.",
"Encoding: The sentence is tokenized by words. Each token is represented by the concatenation of its embedding vector and the final state of a bidirectional recurrent network over the characters of this word. Finally, another bidirectional RNN on the top of this representation produce the encoding of the document and the question.",
"Question/document attention: We apply a question/document attention layer that matches the representation of the question with each token of the document individually to output an attention that gives more weight to the important tokens of the document regarding the question.",
"Self-attention layer: The previous layer has built a question-aware representation of the document. One problem with such representation is that form the moment each token has only a good knowledge of its closest neighbors. To tackle this problem, BIBREF12 have proposed to use a self-attention layer that matches each individual token with all the other tokens of the document. Doing that, each token is now aware of a larger context.",
"Output layer: A bidirectional RNN is applied on the top of the last layer and we use its final state as the representation of the input. We use a projection matrix to project this representation into the answer space and select the most probable one"
],
[
"We propose to train these models on the entire set of tasks and them to measure the overall performance and the accuracy of each individual task. In all the models, we use the Adam optimizer BIBREF21 with a learning rate of 0.01 and the batch size is set to 64. All the parameter are initialized from a Gaussian distribution with mean 0 and a standard deviation of 0.01. The dimension of the word embeddings in the projective deep reading model and the LSTM model is 300 and we use Glove pre-trained vectors ( BIBREF22 ). We use a MemN2N with 5 memory hops and a linear start of 5 epochs. The reviews are split by sentence and each memory block corresponds to one sentence. Each sentence is represented by its bag-of-word representation augmented with temporal encoding as it is suggested in BIBREF18 ."
],
[
"Table TABREF19 displays the performance of the 4 baselines on the ReviewQA's test set. These results are the performance achieved by our own implementation of these 4 models. According to our results, the simple LSTM network and the MemN2N perform very poorly on this dataset. Especially on the most advanced reasoning tasks. Indeed, the task 5 which corresponds to the prediction of the exact rating of an aspect seems to be very challenging for these model. Maybe the tokenization by sentence to create the memory blocks of the MemN2N, which is appropriated in the case of the bAbI tasks, is not a good representation of the documents when it has to handle human generated comments. However, the logistic regression achieves reasonable performance on these tasks, and do not suffer from catastrophic performance on any tasks. Its worst result comes on task 6 and one of the reason is probably that this architecture is not designed to predict a list of answers. On the contrary, the deep projective reader achieves encouraging on this dataset. It outperforms all the other baselines, with very good scores on the first fourth tasks. The question/document and document/document attention layers proposed in BIBREF12 seem once again to produce rich encodings of the inputs which are relevant for our projection layer."
],
[
"In this paper, we formalize the sentiment analysis task through the framework of machine reading and release ReviewQA, a relational question-answering corpus. This dataset allows evaluating a set of relational reasoning skills through natural language questions. It is composed of a large panel of human-generated questions. Moreover, we propose to augment the dataset with backtranslated reformulations of these questions. Finally, we evaluate 4 models on this dataset, including a projective model of our own design that seems to be a strong baseline for this dataset. We expect that this large dataset will encourage the research community to develop reasoning models and evaluate their models on this set of tasks."
],
[
"We thank Vassilina Nikoulina and Stéphane Clinchant for the help regarding the backtranslation rewording of the questions."
]
],
"section_name": [
"Introduction",
"Machine comprehension datasets",
"Attention-based models for aspect-based sentiment analysis",
"ReviewQA dataset",
"Original data",
"Relational reasoning competencies",
"Construction of the dataset",
"Paraphrase augmentation using backtranslation",
"Models",
"Training details",
"Model performance",
"Conclusion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"137e96297a76f6a653d316a41f70c569891be9e7"
],
"answer": [
{
"evidence": [
"We introduce a list of 8 different competencies that a reading system should master in order to process reviews and text documents in general. These 8 tasks require different competencies and a different level of understanding of the document to be well answered. For instance, detecting if an aspect is mentioned in a review will require less understanding of the review than predicting explicitly the rating of this aspect. Table TABREF10 presents the 8 tasks we have introduced in this dataset with an example of a question that corresponds to each task. We also provide the expected type of the answer (Yes/No question, rating question...). It can be an additional tool to analyze the errors of the readers."
],
"extractive_spans": [
"These 8 tasks require different competencies and a different level of understanding of the document to be well answered"
],
"free_form_answer": "",
"highlighted_evidence": [
"We introduce a list of 8 different competencies that a reading system should master in order to process reviews and text documents in general. These 8 tasks require different competencies and a different level of understanding of the document to be well answered. For instance, detecting if an aspect is mentioned in a review will require less understanding of the review than predicting explicitly the rating of this aspect. Table TABREF10 presents the 8 tasks we have introduced in this dataset with an example of a question that corresponds to each task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4fb5aca8f96a5927f7fdecdde38bf3e7aedfff9c",
"8febaed24c8afa73ee27ab866d7f74785e0fd426"
],
"answer": [
{
"evidence": [
"Logistic regression: To produce the representation of the input, we concatenate the Bag-Of-Words representation of the document with the Bag-Of-Words representation of the question. It produces an array of size INLINEFORM0 where INLINEFORM1 is the vocabulary size. Then we use a logistic regression to select the most probable answer among the INLINEFORM2 possibilities.",
"LSTM: We start with a concatenation of the sequence of indexes of the document with the sequence of indexes of the question. Them we feed an LSTM network with this vector and use the final state as the representation of the input. Finally, we apply a logistic regression over this representation to produce the final decision.",
"End-to-end memory networks: This architecture is based on two different memory cells (input and output) that contain a representation of the document. A controller, initialized with the encoding of the question, is used to calculate an attention between this controller and the representation of the document in the input memory. This attention is them used to re-weight the representation of the document in the output memory. This response from the output memory is them utilized to update the controller. After that, either a matrix is used to project this representation into the answer space either the controller is used to go through an over hop of memory. This architecture allows the model to sequentially look into the initial document seeking for important information regarding the current state of its controller. This model achieves very good performances on the 20 bAbI tasks dataset.",
"Deep projective reader: This is a model of our own design, largely inspired by the efficient R-net reader BIBREF12 . The overall architecture is composed of 4 stacked layers: an encoding layer, a question/document attention, a self-attention layer and a projection layer. The following paragraphs briefly describe the overall utility of each of these layers."
],
"extractive_spans": [
"Logistic regression",
"LSTM",
"End-to-end memory networks",
"Deep projective reader"
],
"free_form_answer": "",
"highlighted_evidence": [
"Logistic regression: To produce the representation of the input, we concatenate the Bag-Of-Words representation of the document with the Bag-Of-Words representation of the question.",
"LSTM: We start with a concatenation of the sequence of indexes of the document with the sequence of indexes of the question. Them we feed an LSTM network with this vector and use the final state as the representation of the input. Finally, we apply a logistic regression over this representation to produce the final decision.",
"End-to-end memory networks: This architecture is based on two different memory cells (input and output) that contain a representation of the document.",
"Deep projective reader: This is a model of our own design, largely inspired by the efficient R-net reader BIBREF12 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this section, we present the performance of four different models on our dataset: a logistic regression and three neural models. The first one is a basic LSTM BIBREF20 , the second a MemN2N BIBREF18 and the third one is a model of our own design. This fourth model reuses the encoding layers of the R-net BIBREF12 and we modify the final layers with a projection layer that will be able to select the answer among the set of candidates instead of pointing the answerer directly into the source document.",
"Logistic regression: To produce the representation of the input, we concatenate the Bag-Of-Words representation of the document with the Bag-Of-Words representation of the question. It produces an array of size INLINEFORM0 where INLINEFORM1 is the vocabulary size. Then we use a logistic regression to select the most probable answer among the INLINEFORM2 possibilities.",
"LSTM: We start with a concatenation of the sequence of indexes of the document with the sequence of indexes of the question. Them we feed an LSTM network with this vector and use the final state as the representation of the input. Finally, we apply a logistic regression over this representation to produce the final decision.",
"End-to-end memory networks: This architecture is based on two different memory cells (input and output) that contain a representation of the document. A controller, initialized with the encoding of the question, is used to calculate an attention between this controller and the representation of the document in the input memory. This attention is them used to re-weight the representation of the document in the output memory. This response from the output memory is them utilized to update the controller. After that, either a matrix is used to project this representation into the answer space either the controller is used to go through an over hop of memory. This architecture allows the model to sequentially look into the initial document seeking for important information regarding the current state of its controller. This model achieves very good performances on the 20 bAbI tasks dataset.",
"Deep projective reader: This is a model of our own design, largely inspired by the efficient R-net reader BIBREF12 . The overall architecture is composed of 4 stacked layers: an encoding layer, a question/document attention, a self-attention layer and a projection layer. The following paragraphs briefly describe the overall utility of each of these layers."
],
"extractive_spans": [
"Logistic regression",
"LSTM",
"End-to-end memory networks",
"Deep projective reader"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this section, we present the performance of four different models on our dataset: a logistic regression and three neural models. ",
"Logistic regression: To produce the representation of the input, we concatenate the Bag-Of-Words representation of the document with the Bag-Of-Words representation of the question. ",
"LSTM: We start with a concatenation of the sequence of indexes of the document with the sequence of indexes of the question. Them we feed an LSTM network with this vector and use the final state as the representation of the input. Finally, we apply a logistic regression over this representation to produce the final decision.\n\n",
"End-to-end memory networks: This architecture is based on two different memory cells (input and output) that contain a representation of the document. ",
"Deep projective reader: This is a model of our own design, largely inspired by the efficient R-net reader BIBREF12 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"52c16ac09ea6d0f37daf1f5d6b79c3b3128d79bf",
"8cdc2a554963786f4b7cd5a67ae67239285228e4"
],
"answer": [
{
"evidence": [
"Table TABREF19 displays the performance of the 4 baselines on the ReviewQA's test set. These results are the performance achieved by our own implementation of these 4 models. According to our results, the simple LSTM network and the MemN2N perform very poorly on this dataset. Especially on the most advanced reasoning tasks. Indeed, the task 5 which corresponds to the prediction of the exact rating of an aspect seems to be very challenging for these model. Maybe the tokenization by sentence to create the memory blocks of the MemN2N, which is appropriated in the case of the bAbI tasks, is not a good representation of the documents when it has to handle human generated comments. However, the logistic regression achieves reasonable performance on these tasks, and do not suffer from catastrophic performance on any tasks. Its worst result comes on task 6 and one of the reason is probably that this architecture is not designed to predict a list of answers. On the contrary, the deep projective reader achieves encouraging on this dataset. It outperforms all the other baselines, with very good scores on the first fourth tasks. The question/document and document/document attention layers proposed in BIBREF12 seem once again to produce rich encodings of the inputs which are relevant for our projection layer."
],
"extractive_spans": [
"ReviewQA's test set"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF19 displays the performance of the 4 baselines on the ReviewQA's test set. These results are the performance achieved by our own implementation of these 4 models."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Descriptions and examples of the 8 tasks evaluated in ReviewQA.",
"We introduce a list of 8 different competencies that a reading system should master in order to process reviews and text documents in general. These 8 tasks require different competencies and a different level of understanding of the document to be well answered. For instance, detecting if an aspect is mentioned in a review will require less understanding of the review than predicting explicitly the rating of this aspect. Table TABREF10 presents the 8 tasks we have introduced in this dataset with an example of a question that corresponds to each task. We also provide the expected type of the answer (Yes/No question, rating question...). It can be an additional tool to analyze the errors of the readers."
],
"extractive_spans": [],
"free_form_answer": "Detection of an aspect in a review, Prediction of the customer general satisfaction, Prediction of the global trend of an aspect in a given review, Prediction of whether the rating of a given aspect is above or under a given value, Prediction of the exact rating of an aspect in a review, Prediction of the list of all the positive/negative aspects mentioned in the review, Comparison between aspects, Prediction of the strengths and weaknesses in a review",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Descriptions and examples of the 8 tasks evaluated in ReviewQA.",
"Table TABREF10 presents the 8 tasks we have introduced in this dataset with an example of a question that corresponds to each task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"58c5dd98892a2fd55aeba925c736e938ad8c883f",
"8a233c61d40f8f5ea0224da7a983559f49e33318"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"In order to generate more paraphrases of the questions, we used a backtranslation method to enrich them. The idea is to use a translation model that will translate our human-generated questions into another language, and then translate them back to English. This double translation will introduce rewordings of the questions that we will be able to integrate into this dataset. This approach has been used in BIBREF7 to perform data augmentation on the training set. For this purpose, we have trained a fairseq BIBREF19 model to translate sentences from English to French and for French to English. In order to preserve the quality of the sentences we have so far, we only keep the most probable translation of each original sentence. Indeed a beam search is used during the translation to predict the most probable translations which mean that we each translation comes with an associated probability. By selecting only the first translations, we almost double the number of questions without degrading the quality of the questions proposed in the dataset."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to generate more paraphrases of the questions, we used a backtranslation method to enrich them. The idea is to use a translation model that will translate our human-generated questions into another language, and then translate them back to English. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"a8beaa60bd66c3b6b8d2b2af4f7474f95aa5ed75",
"bfa6aaca5266e7ad31b955d44284f18a0991bb76"
],
"answer": [
{
"evidence": [
"Following concepts proposed in the 20 bAbI tasks BIBREF4 or in the visual question-answering dataset CLEVR BIBREF9 , we think that the challenge, limited to the detection of relevant passages in a document, is only the first step in building systems that truly understand text. The second step is the ability of reasoning with the relevant information extracted from a document. To set up this challenge, we propose to leverage on a hotel reviews corpus that requires reasoning skills to answer natural language questions. The reviews we used have been extracted from TripAdvisor and originally proposed in BIBREF10 , BIBREF11 . In the original data, each review comes with a set of rated aspects among the seventh available: Business service, Check in / Front Desk, Cleanliness, Location, Room, Sleep Quality, Value and for all the reviews an Overall rating. In this articles we propose to exploit these data to create a dataset of question-answering that will challenge 8 competencies of the reader."
],
"extractive_spans": [
"TripAdvisor"
],
"free_form_answer": "",
"highlighted_evidence": [
"The reviews we used have been extracted from TripAdvisor and originally proposed in BIBREF10 , BIBREF11 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Following concepts proposed in the 20 bAbI tasks BIBREF4 or in the visual question-answering dataset CLEVR BIBREF9 , we think that the challenge, limited to the detection of relevant passages in a document, is only the first step in building systems that truly understand text. The second step is the ability of reasoning with the relevant information extracted from a document. To set up this challenge, we propose to leverage on a hotel reviews corpus that requires reasoning skills to answer natural language questions. The reviews we used have been extracted from TripAdvisor and originally proposed in BIBREF10 , BIBREF11 . In the original data, each review comes with a set of rated aspects among the seventh available: Business service, Check in / Front Desk, Cleanliness, Location, Room, Sleep Quality, Value and for all the reviews an Overall rating. In this articles we propose to exploit these data to create a dataset of question-answering that will challenge 8 competencies of the reader."
],
"extractive_spans": [
"TripAdvisor"
],
"free_form_answer": "",
"highlighted_evidence": [
" The reviews we used have been extracted from TripAdvisor and originally proposed in BIBREF10 , BIBREF11 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"What kind of questions are present in the dataset?",
"What baselines are presented?",
"What tasks were evaluated?",
"What language are the reviews in?",
"Where are the hotel reviews from?"
],
"question_id": [
"b962cc817a4baf6c56150f0d97097f18ad6cd9ed",
"fb5fb11e7d01b9f9efe3db3417b8faf4f8d6931f",
"52f8a3e3cd5d42126b5307adc740b71510a6bdf5",
"2236386729105f5cf42f73cc055ce3acdea2d452",
"18942ab8c365955da3fd8fc901dfb1a3b65c1be1"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 2: Number of words per review.",
"Figure 1: An example from the original dataset.",
"Table 1: Descriptions and examples of the 8 tasks evaluated in ReviewQA.",
"Figure 3: Distribution of the ratings per aspect.",
"Figure 4: Distribution of answers in the generated dataset.",
"Table 2: Repartition of the questions into the train and test set.",
"Figure 5: Repartition of the tasks in the generated dataset.",
"Table 3: Accuracy (%) of 4 models on the ReviewQA dataset."
],
"file": [
"4-Figure2-1.png",
"4-Figure1-1.png",
"5-Table1-1.png",
"6-Figure3-1.png",
"6-Figure4-1.png",
"6-Table2-1.png",
"7-Figure5-1.png",
"8-Table3-1.png"
]
} | [
"What tasks were evaluated?"
] | [
[
"1810.12196-5-Table1-1.png",
"1810.12196-Model performance-0",
"1810.12196-Relational reasoning competencies-1"
]
] | [
"Detection of an aspect in a review, Prediction of the customer general satisfaction, Prediction of the global trend of an aspect in a given review, Prediction of whether the rating of a given aspect is above or under a given value, Prediction of the exact rating of an aspect in a review, Prediction of the list of all the positive/negative aspects mentioned in the review, Comparison between aspects, Prediction of the strengths and weaknesses in a review"
] | 129 |
1707.05236 | Artificial Error Generation with Machine Translation and Syntactic Patterns | Shortage of available training data is holding back progress in the area of automated error detection. This paper investigates two alternative methods for artificially generating writing errors, in order to create additional resources. We propose treating error generation as a machine translation task, where grammatically correct text is translated to contain errors. In addition, we explore a system for extracting textual patterns from an annotated corpus, which can then be used to insert errors into grammatically correct sentences. Our experiments show that the inclusion of artificially generated errors significantly improves error detection accuracy on both FCE and CoNLL 2014 datasets. | {
"paragraphs": [
[
"Writing errors can occur in many different forms – from relatively simple punctuation and determiner errors, to mistakes including word tense and form, incorrect collocations and erroneous idioms. Automatically identifying all of these errors is a challenging task, especially as the amount of available annotated data is very limited. Rei2016 showed that while some error detection algorithms perform better than others, it is additional training data that has the biggest impact on improving performance.",
"Being able to generate realistic artificial data would allow for any grammatically correct text to be transformed into annotated examples containing writing errors, producing large amounts of additional training examples. Supervised error generation systems would also provide an efficient method for anonymising the source corpus – error statistics from a private corpus can be aggregated and applied to a different target text, obscuring sensitive information in the original examination scripts. However, the task of creating incorrect data is somewhat more difficult than might initially appear – naive methods for error generation can create data that does not resemble natural errors, thereby making downstream systems learn misleading or uninformative patterns.",
"Previous work on artificial error generation (AEG) has focused on specific error types, such as prepositions and determiners BIBREF0 , BIBREF1 , or noun number errors BIBREF2 . Felice2014a investigated the use of linguistic information when generating artificial data for error correction, but also restricting the approach to only five error types. There has been very limited research on generating artificial data for all types, which is important for general-purpose error detection systems. For example, the error types investigated by Felice2014a cover only 35.74% of all errors present in the CoNLL 2014 training dataset, providing no additional information for the majority of errors.",
"In this paper, we investigate two supervised approaches for generating all types of artificial errors. We propose a framework for generating errors based on statistical machine translation (SMT), training a model to translate from correct into incorrect sentences. In addition, we describe a method for learning error patterns from an annotated corpus and transplanting them into error-free text. We evaluate the effect of introducing artificial data on two error detection benchmarks. Our results show that each method provides significant improvements over using only the available training set, and a combination of both gives an absolute improvement of 4.3% in INLINEFORM0 , without requiring any additional annotated data.",
" "
],
[
"We investigate two alternative methods for AEG. The models receive grammatically correct text as input and modify certain tokens to produce incorrect sequences. The alternative versions of each sentence are aligned using Levenshtein distance, allowing us to identify specific words that need to be marked as errors. While these alignments are not always perfect, we found them to be sufficient for practical purposes, since alternative alignments of similar sentences often result in the same binary labeling. Future work could explore more advanced alignment methods, such as proposed by felice-bryant-briscoe.",
"In Section SECREF4 , this automatically labeled data is then used for training error detection models."
],
[
"We treat AEG as a translation task – given a correct sentence as input, the system would learn to translate it to contain likely errors, based on a training corpus of parallel data. Existing SMT approaches are already optimised for identifying context patterns that correspond to specific output sequences, which is also required for generating human-like errors. The reverse of this idea, translating from incorrect to correct sentences, has been shown to work well for error correction tasks BIBREF2 , BIBREF3 , and round-trip translation has also been shown to be promising for correcting grammatical errors BIBREF4 .",
"Following previous work BIBREF2 , BIBREF5 , we build a phrase-based SMT error generation system. During training, error-corrected sentences in the training data are treated as the source, and the original sentences written by language learners as the target. Pialign BIBREF6 is used to create a phrase translation table directly from model probabilities. In addition to default features, we add character-level Levenshtein distance to each mapping in the phrase table, as proposed by Felice:2014-CoNLL. Decoding is performed using Moses BIBREF7 and the language model used during decoding is built from the original erroneous sentences in the learner corpus. The IRSTLM Toolkit BIBREF8 is used for building a 5-gram language model with modified Kneser-Ney smoothing BIBREF9 ."
],
[
"We also describe a method for AEG using patterns over words and part-of-speech (POS) tags, extracting known incorrect sequences from a corpus of annotated corrections. This approach is based on the best method identified by Felice2014a, using error type distributions; while they covered only 5 error types, we relax this restriction and learn patterns for generating all types of errors.",
"The original and corrected sentences in the corpus are aligned and used to identify short transformation patterns in the form of (incorrect phrase, correct phrase). The length of each pattern is the affected phrase, plus up to one token of context on both sides. If a word form changes between the incorrect and correct text, it is fully saved in the pattern, otherwise the POS tags are used for matching.",
"For example, the original sentence `We went shop on Saturday' and the corrected version `We went shopping on Saturday' would produce the following pattern:",
"(VVD shop_VV0 II, VVD shopping_VVG II)",
"After collecting statistics from the background corpus, errors can be inserted into error-free text. The learned patterns are now reversed, looking for the correct side of the tuple in the input sentence. We only use patterns with frequency INLINEFORM0 , which yields a total of 35,625 patterns from our training data. For each input sentence, we first decide how many errors will be generated (using probabilities from the background corpus) and attempt to create them by sampling from the collection of applicable patterns. This process is repeated until all the required errors have been generated or the sentence is exhausted. During generation, we try to balance the distribution of error types as well as keeping the same proportion of incorrect and correct sentences as in the background corpus BIBREF10 . The required POS tags were generated with RASP BIBREF11 , using the CLAWS2 tagset."
],
[
"We construct a neural sequence labeling model for error detection, following the previous work BIBREF12 , BIBREF13 . The model receives a sequence of tokens as input and outputs a prediction for each position, indicating whether the token is correct or incorrect in the current context. The tokens are first mapped to a distributed vector space, resulting in a sequence of word embeddings. Next, the embeddings are given as input to a bidirectional LSTM BIBREF14 , in order to create context-dependent representations for every token. The hidden states from forward- and backward-LSTMs are concatenated for each word position, resulting in representations that are conditioned on the whole sequence. This concatenated vector is then passed through an additional feedforward layer, and a softmax over the two possible labels (correct and incorrect) is used to output a probability distribution for each token. The model is optimised by minimising categorical cross-entropy with respect to the correct labels. We use AdaDelta BIBREF15 for calculating an adaptive learning rate during training, which accounts for a higher baseline performance compared to previous results."
],
[
"We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens).. While there are other text corpora that could be used (e.g., Wikipedia and news articles), our development experiments showed that keeping the writing style and vocabulary close to the target domain gives better results compared to simply including more data.",
"We evaluated our detection models on three benchmarks: the FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) BIBREF3 . Each artificial error generation system was used to generate 3 different versions of the artificial data, which were then combined with the original annotated dataset and used for training an error detection system. Table TABREF1 contains example sentences from the error generation systems, highlighting each of the edits that are marked as errors.",
"The error detection results can be seen in Table TABREF4 . We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 . INLINEFORM1 calculates a weighted harmonic mean of precision and recall, which assigns twice as much importance to precision – this is motivated by practical applications, where accurate predictions from an error detection system are more important compared to coverage. For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset.",
"The results show that error detection performance is substantially improved by making use of artificially generated data, created by any of the described methods. When comparing the error generation system by Felice2014a (FY14) with our pattern-based (PAT) and machine translation (MT) approaches, we see that the latter methods covering all error types consistently improve performance. While the added error types tend to be less frequent and more complicated to capture, the added coverage is indeed beneficial for error detection. Combining the pattern-based approach with the machine translation system (Ann+PAT+MT) gave the best overall performance on all datasets. The two frameworks learn to generate different types of errors, and taking advantage of both leads to substantial improvements in error detection.",
"We used the Approximate Randomisation Test BIBREF17 , BIBREF18 to calculate statistical significance and found that the improvement for each of the systems using artificial data was significant over using only manual annotation. In addition, the final combination system is also significantly better compared to the Felice2014a system, on all three datasets. While Rei2016 also report separate experiments that achieve even higher performance, these models were trained on a considerably larger proprietary corpus. In this paper we compare error detection frameworks trained on the same publicly available FCE dataset, thereby removing the confounding factor of dataset size and only focusing on the model architectures.",
"The error generation methods can generate alternative versions of the same input text – the pattern-based method randomly samples the error locations, and the SMT system can provide an n-best list of alternative translations. Therefore, we also investigated the combination of multiple error-generated versions of the input files when training error detection models. Figure FIGREF6 shows the INLINEFORM0 score on the development set, as the training data is increased by using more translations from the n-best list of the SMT system. These results reveal that allowing the model to see multiple alternative versions of the same file gives a distinct improvement – showing the model both correct and incorrect variations of the same sentences likely assists in learning a discriminative model."
],
[
"Our work builds on prior research into AEG. Brockett2006 constructed regular expressions for transforming correct sentences to contain noun number errors. Rozovskaya2010a learned confusion sets from an annotated corpus in order to generate preposition errors. Foster2009 devised a tool for generating errors for different types using patterns provided by the user or collected automatically from an annotated corpus. However, their method uses a limited number of edit operations and is thus unable to generate complex errors. Cahill2013 compared different training methodologies and showed that artificial errors helped correct prepositions. Felice2014a learned error type distributions for generating five types of errors, and the system in Section SECREF3 is an extension of this model. While previous work focused on generating a specific subset of error types, we explored two holistic approaches to AEG and showed that they are able to significantly improve error detection performance."
],
[
"This paper investigated two AEG methods, in order to create additional training data for error detection. First, we explored a method using textual patterns learned from an annotated corpus, which are used for inserting errors into correct input text. In addition, we proposed formulating error generation as an MT framework, learning to translate from grammatically correct to incorrect sentences.",
"The addition of artificial data to the training process was evaluated on three error detection annotations, using the FCE and CoNLL 2014 datasets. Making use of artificial data provided improvements for all data generation methods. By relaxing the type restrictions and generating all types of errors, our pattern-based method consistently outperformed the system by Felice2014a. The combination of the pattern-based method with the machine translation approach gave further substantial improvements and the best performance on all datasets."
]
],
"section_name": [
"Introduction",
"Error Generation Methods",
"Machine Translation",
"Pattern Extraction",
"Error Detection Model",
"Evaluation",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"14fe821ecd8a4a20d30dc7bef012a749d9e1b5a3",
"e80c81d447afde2dc18cb876c034996fce43339c"
],
"answer": [
{
"evidence": [
"The error detection results can be seen in Table TABREF4 . We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 . INLINEFORM1 calculates a weighted harmonic mean of precision and recall, which assigns twice as much importance to precision – this is motivated by practical applications, where accurate predictions from an error detection system are more important compared to coverage. For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset."
],
"extractive_spans": [
"error detection system by Rei2016"
],
"free_form_answer": "",
"highlighted_evidence": [
"For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The error detection results can be seen in Table TABREF4 . We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 . INLINEFORM1 calculates a weighted harmonic mean of precision and recall, which assigns twice as much importance to precision – this is motivated by practical applications, where accurate predictions from an error detection system are more important compared to coverage. For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset."
],
"extractive_spans": [
"error detection system by Rei2016"
],
"free_form_answer": "",
"highlighted_evidence": [
"For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"f6239f7f10c10c50284f2b5c8d6edd44db0bc334"
],
"answer": [
{
"evidence": [
"The error detection results can be seen in Table TABREF4 . We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 . INLINEFORM1 calculates a weighted harmonic mean of precision and recall, which assigns twice as much importance to precision – this is motivated by practical applications, where accurate predictions from an error detection system are more important compared to coverage. For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset.",
"FLOAT SELECTED: Table 2: Error detection performance when combining manually annotated and artificial training data.",
"The results show that error detection performance is substantially improved by making use of artificially generated data, created by any of the described methods. When comparing the error generation system by Felice2014a (FY14) with our pattern-based (PAT) and machine translation (MT) approaches, we see that the latter methods covering all error types consistently improve performance. While the added error types tend to be less frequent and more complicated to capture, the added coverage is indeed beneficial for error detection. Combining the pattern-based approach with the machine translation system (Ann+PAT+MT) gave the best overall performance on all datasets. The two frameworks learn to generate different types of errors, and taking advantage of both leads to substantial improvements in error detection."
],
"extractive_spans": [],
"free_form_answer": "Combining pattern based and Machine translation approaches gave the best overall F0.5 scores. It was 49.11 for FCE dataset , 21.87 for the first annotation of CoNLL-14, and 30.13 for the second annotation of CoNLL-14. ",
"highlighted_evidence": [
"The error detection results can be seen in Table TABREF4 . We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 .",
"FLOAT SELECTED: Table 2: Error detection performance when combining manually annotated and artificial training data.",
"The results show that error detection performance is substantially improved by making use of artificially generated data, created by any of the described methods. When comparing the error generation system by Felice2014a (FY14) with our pattern-based (PAT) and machine translation (MT) approaches, we see that the latter methods covering all error types consistently improve performance. While the added error types tend to be less frequent and more complicated to capture, the added coverage is indeed beneficial for error detection. Combining the pattern-based approach with the machine translation system (Ann+PAT+MT) gave the best overall performance on all datasets. The two frameworks learn to generate different types of errors, and taking advantage of both leads to substantial improvements in error detection."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"171aab62d8d15d54fd3b1dcb4cca4b43682f6947",
"58ffd8c3485b1b49521247f5e8640af148c2876f"
],
"answer": [
{
"evidence": [
"For example, the original sentence `We went shop on Saturday' and the corrected version `We went shopping on Saturday' would produce the following pattern:",
"(VVD shop_VV0 II, VVD shopping_VVG II)",
"After collecting statistics from the background corpus, errors can be inserted into error-free text. The learned patterns are now reversed, looking for the correct side of the tuple in the input sentence. We only use patterns with frequency INLINEFORM0 , which yields a total of 35,625 patterns from our training data. For each input sentence, we first decide how many errors will be generated (using probabilities from the background corpus) and attempt to create them by sampling from the collection of applicable patterns. This process is repeated until all the required errors have been generated or the sentence is exhausted. During generation, we try to balance the distribution of error types as well as keeping the same proportion of incorrect and correct sentences as in the background corpus BIBREF10 . The required POS tags were generated with RASP BIBREF11 , using the CLAWS2 tagset."
],
"extractive_spans": [
"(VVD shop_VV0 II, VVD shopping_VVG II)"
],
"free_form_answer": "",
"highlighted_evidence": [
"For example, the original sentence `We went shop on Saturday' and the corrected version `We went shopping on Saturday' would produce the following pattern:\n\n(VVD shop_VV0 II, VVD shopping_VVG II)\n\nAfter collecting statistics from the background corpus, errors can be inserted into error-free text. The learned patterns are now reversed, looking for the correct side of the tuple in the input sentence. We only use patterns with frequency INLINEFORM0 , which yields a total of 35,625 patterns from our training data. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We also describe a method for AEG using patterns over words and part-of-speech (POS) tags, extracting known incorrect sequences from a corpus of annotated corrections. This approach is based on the best method identified by Felice2014a, using error type distributions; while they covered only 5 error types, we relax this restriction and learn patterns for generating all types of errors.",
"The original and corrected sentences in the corpus are aligned and used to identify short transformation patterns in the form of (incorrect phrase, correct phrase). The length of each pattern is the affected phrase, plus up to one token of context on both sides. If a word form changes between the incorrect and correct text, it is fully saved in the pattern, otherwise the POS tags are used for matching."
],
"extractive_spans": [
"patterns for generating all types of errors"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also describe a method for AEG using patterns over words and part-of-speech (POS) tags, extracting known incorrect sequences from a corpus of annotated corrections. This approach is based on the best method identified by Felice2014a, using error type distributions; while they covered only 5 error types, we relax this restriction and learn patterns for generating all types of errors.",
"The original and corrected sentences in the corpus are aligned and used to identify short transformation patterns in the form of (incorrect phrase, correct phrase). The length of each pattern is the affected phrase, plus up to one token of context on both sides. If a word form changes between the incorrect and correct text, it is fully saved in the pattern, otherwise the POS tags are used for matching."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"1faaa44c576625887d55ba35ed862ec9b161a528",
"548a9ba34ce0a0e7b58d92cfc73b3e0a9b406b37"
],
"answer": [
{
"evidence": [
"We evaluated our detection models on three benchmarks: the FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) BIBREF3 . Each artificial error generation system was used to generate 3 different versions of the artificial data, which were then combined with the original annotated dataset and used for training an error detection system. Table TABREF1 contains example sentences from the error generation systems, highlighting each of the edits that are marked as errors."
],
"extractive_spans": [
" FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) "
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluated our detection models on three benchmarks: the FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) BIBREF3 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens).. While there are other text corpora that could be used (e.g., Wikipedia and news articles), our development experiments showed that keeping the writing style and vocabulary close to the target domain gives better results compared to simply including more data.",
"We evaluated our detection models on three benchmarks: the FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) BIBREF3 . Each artificial error generation system was used to generate 3 different versions of the artificial data, which were then combined with the original annotated dataset and used for training an error detection system. Table TABREF1 contains example sentences from the error generation systems, highlighting each of the edits that are marked as errors."
],
"extractive_spans": [
"FCE ",
" two alternative annotations of the CoNLL 2014 Shared Task dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. ",
"We evaluated our detection models on three benchmarks: the FCE test data (41K tokens) and the two alternative annotations of the CoNLL 2014 Shared Task dataset (30K tokens) BIBREF3 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"5b192b015c89c56f449ab079f4f07a56bcfdfd15",
"605b6e58739c3f8b6545e241915fa0e727c49a31"
],
"answer": [
{
"evidence": [
"We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens).. While there are other text corpora that could be used (e.g., Wikipedia and news articles), our development experiments showed that keeping the writing style and vocabulary close to the target domain gives better results compared to simply including more data."
],
"extractive_spans": [
"English "
],
"free_form_answer": "",
"highlighted_evidence": [
". Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens).. While there are other text corpora that could be used (e.g., Wikipedia and news articles), our development experiments showed that keeping the writing style and vocabulary close to the target domain gives better results compared to simply including more data."
],
"extractive_spans": [
"English "
],
"free_form_answer": "",
"highlighted_evidence": [
"We trained our error generation models on the public FCE training set BIBREF16 and used them to generate additional artificial training data. Grammatically correct text is needed as the starting point for inserting artificial errors, and we used two different sources: 1) the corrected version of the same FCE training set on which the system is trained (450K tokens), and 2) example sentences extracted from the English Vocabulary Profile (270K tokens).. While there are other text corpora that could be used (e.g., Wikipedia and news articles), our development experiments showed that keeping the writing style and vocabulary close to the target domain gives better results compared to simply including more data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"What was the baseline used?",
"What are their results on both datasets?",
"What textual patterns are extracted?",
"Which annotated corpus did they use?",
"Which languages are explored in this paper?"
],
"question_id": [
"7b4992e2d26577246a16ac0d1efc995ab4695d24",
"ab9b0bde6113ffef8eb1c39919d21e5913a05081",
"9a9d225f9ac35ed35ea02f554f6056af3b42471d",
"ea56148a8356a1918bedcf0a99ae667c27792cfe",
"cd32a38e0f33b137ab590e1677e8fb073724df7f"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Example artificial errors generated by three systems: the error generation method by Felice and Yuan (2014) (FY14), our pattern-based method covering all error types (PAT), and the machine translation approach to artificial error generation (MT).",
"Table 2: Error detection performance when combining manually annotated and artificial training data.",
"Figure 1: F0.5 on FCE development set with increasing amounts of artificial data from SMT."
],
"file": [
"2-Table1-1.png",
"4-Table2-1.png",
"4-Figure1-1.png"
]
} | [
"What are their results on both datasets?"
] | [
[
"1707.05236-4-Table2-1.png",
"1707.05236-Evaluation-3",
"1707.05236-Evaluation-2"
]
] | [
"Combining pattern based and Machine translation approaches gave the best overall F0.5 scores. It was 49.11 for FCE dataset , 21.87 for the first annotation of CoNLL-14, and 30.13 for the second annotation of CoNLL-14. "
] | 130 |
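
The pattern-based artificial error generation described in the record above learns (incorrect phrase, correct phrase) tuples from aligned original/corrected sentences and then applies them in reverse to insert errors into clean text. The following is a minimal illustrative sketch of that idea, not the authors' implementation: it matches on surface tokens rather than POS tags, and the function names, the frequency threshold, and the input format are assumptions introduced here.

import random
from collections import Counter

def learn_patterns(aligned_phrase_pairs, min_freq=5):
    # aligned_phrase_pairs: (incorrect_tokens, correct_tokens) pairs, each a tuple of
    # token tuples, assumed already extracted from aligned sentence pairs with one
    # token of context on each side. Only frequent patterns are kept, mirroring the
    # frequency cut-off mentioned in the quoted evidence; 5 is a placeholder value.
    counts = Counter(aligned_phrase_pairs)
    return [pattern for pattern, freq in counts.items() if freq >= min_freq]

def insert_errors(tokens, patterns, n_errors):
    # Reverse application: search for the *correct* side of a pattern in the
    # error-free sentence and replace it with the *incorrect* side.
    out = list(tokens)
    patterns = list(patterns)
    random.shuffle(patterns)
    made = 0
    for incorrect, correct in patterns:
        if made >= n_errors:
            break
        k = len(correct)
        for i in range(len(out) - k + 1):
            if tuple(out[i:i + k]) == tuple(correct):
                out[i:i + k] = list(incorrect)
                made += 1
                break
    return out

A driver loop would then sample how many errors to insert per sentence from background-corpus statistics, as described in the quoted evidence above.
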
1810.04428 | Improving Neural Text Simplification Model with Simplified Corpora | Text simplification (TS) can be viewed as monolingual translation task, translating between text variations within a single language. Recent neural TS models draw on insights from neural machine translation to learn lexical simplification and content reduction using encoder-decoder model. But different from neural machine translation, we cannot obtain enough ordinary and simplified sentence pairs for TS, which are expensive and time-consuming to build. Target-side simplified sentences plays an important role in boosting fluency for statistical TS, and we investigate the use of simplified sentences to train, with no changes to the network architecture. We propose to pair simple training sentence with a synthetic ordinary sentence via back-translation, and treating this synthetic data as additional training data. We train encoder-decoder model using synthetic sentence pairs and original sentence pairs, which can obtain substantial improvements on the available WikiLarge data and WikiSmall data compared with the state-of-the-art methods. | {
"paragraphs": [
[
"Text simplification aims to reduce the lexical and structural complexity of a text, while still retaining the semantic meaning, which can help children, non-native speakers, and people with cognitive disabilities, to understand text better. One of the methods of automatic text simplification can be generally divided into three categories: lexical simplification (LS) BIBREF0 , BIBREF1 , rule-based BIBREF2 , and machine translation (MT) BIBREF3 , BIBREF4 . LS is mainly used to simplify text by substituting infrequent and difficult words with frequent and easier words. However, there are several challenges for the LS approach: a great number of transformation rules are required for reasonable coverage and should be applied based on the specific context; third, the syntax and semantic meaning of the sentence is hard to retain. Rule-based approaches use hand-crafted rules for lexical and syntactic simplification, for example, substituting difficult words in a predefined vocabulary. However, such approaches need a lot of human-involvement to manually define these rules, and it is impossible to give all possible simplification rules. MT-based approach has attracted great attention in the last several years, which addresses text simplification as a monolingual machine translation problem translating from 'ordinary' and 'simplified' sentences.",
"In recent years, neural Machine Translation (NMT) is a newly-proposed deep learning approach and achieves very impressive results BIBREF5 , BIBREF6 , BIBREF7 . Unlike the traditional phrased-based machine translation system which operates on small components separately, NMT system is being trained end-to-end, without the need to have external decoders, language models or phrase tables. Therefore, the existing architectures in NMT are used for text simplification BIBREF8 , BIBREF4 . However, most recent work using NMT is limited to the training data that are scarce and expensive to build. Language models trained on simplified corpora have played a central role in statistical text simplification BIBREF9 , BIBREF10 . One main reason is the amount of available simplified corpora typically far exceeds the amount of parallel data. The performance of models can be typically improved when trained on more data. Therefore, we expect simplified corpora to be especially helpful for NMT models.",
"In contrast to previous work, which uses the existing NMT models, we explore strategy to include simplified training corpora in the training process without changing the neural network architecture. We first propose to pair simplified training sentences with synthetic ordinary sentences during training, and treat this synthetic data as additional training data. We obtain synthetic ordinary sentences through back-translation, i.e. an automatic translation of the simplified sentence into the ordinary sentence BIBREF11 . Then, we mix the synthetic data into the original (simplified-ordinary) data to train NMT model. Experimental results on two publicly available datasets show that we can improve the text simplification quality of NMT models by mixing simplified sentences into the training set over NMT model only using the original training data."
],
[
"Automatic TS is a complicated natural language processing (NLP) task, which consists of lexical and syntactic simplification levels BIBREF12 . It has attracted much attention recently as it could make texts more accessible to wider audiences, and used as a pre-processing step, improve performances of various NLP tasks and systems BIBREF13 , BIBREF14 , BIBREF15 . Usually, hand-crafted, supervised, and unsupervised methods based on resources like English Wikipedia and Simple English Wikipedia (EW-SEW) BIBREF10 are utilized for extracting simplification rules. It is very easy to mix up the automatic TS task and the automatic summarization task BIBREF3 , BIBREF16 , BIBREF6 . TS is different from text summarization as the focus of text summarization is to reduce the length and redundant content.",
"At the lexical level, lexical simplification systems often substitute difficult words using more common words, which only require a large corpus of regular text to obtain word embeddings to get words similar to the complex word BIBREF1 , BIBREF9 . Biran et al. BIBREF0 adopted an unsupervised method for learning pairs of complex and simpler synonyms from a corpus consisting of Wikipedia and Simple Wikipedia. At the sentence level, a sentence simplification model was proposed by tree transformation based on statistical machine translation (SMT) BIBREF3 . Woodsend and Lapata BIBREF17 presented a data-driven model based on a quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. Wubben et al. BIBREF18 proposed a phrase-based machine translation (PBMT) model that is trained on ordinary-simplified sentence pairs. Xu et al. BIBREF19 proposed a syntax-based machine translation model using simplification-specific objective functions and features to encourage simpler output.",
"Compared with SMT, neural machine translation (NMT) has shown to produce state-of-the-art results BIBREF5 , BIBREF7 . The central approach of NMT is an encoder-decoder architecture implemented by recurrent neural networks, which can represent the input sequence as a vector, and then decode that vector into an output sequence. Therefore, NMT models were used for text simplification task, and achieved good results BIBREF8 , BIBREF4 , BIBREF20 . The main limitation of the aforementioned NMT models for text simplification depended on the parallel ordinary-simplified sentence pairs. Because ordinary-simplified sentence pairs are expensive and time-consuming to build, the available largest data is EW-SEW that only have 296,402 sentence pairs. The dataset is insufficiency for NMT model if we want to NMT model can obtain the best parameters. Considering simplified data plays an important role in boosting fluency for phrase-based text simplification, and we investigate the use of simplified data for text simplification. We are the first to show that we can effectively adapt neural translation models for text simplifiation with simplified corpora."
],
[
"We collected a simplified dataset from Simple English Wikipedia that are freely available, which has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 . The simple English Wikipedia is pretty easy to understand than normal English Wikipedia. We downloaded all articles from Simple English Wikipedia. For these articles, we removed stubs, navigation pages and any article that consisted of a single sentence. We then split them into sentences with the Stanford CorNLP BIBREF21 , and deleted these sentences whose number of words are smaller than 10 or large than 40. After removing repeated sentences, we chose 600K sentences as the simplified data with 11.6M words, and the size of vocabulary is 82K."
],
[
"Our work is built on attention-based NMT BIBREF5 as an encoder-decoder network with recurrent neural networks (RNN), which simultaneously conducts dynamic alignment and generation of the target simplified sentence.",
"The encoder uses a bidirectional RNN that consists of forward and backward RNN. Given a source sentence INLINEFORM0 , the forward RNN and backward RNN calculate forward hidden states INLINEFORM1 and backward hidden states INLINEFORM2 , respectively. The annotation vector INLINEFORM3 is obtained by concatenating INLINEFORM4 and INLINEFORM5 .",
"The decoder is a RNN that predicts a target simplificated sentence with Gated Recurrent Unit (GRU) BIBREF22 . Given the previously generated target (simplified) sentence INLINEFORM0 , the probability of next target word INLINEFORM1 is DISPLAYFORM0 ",
"where INLINEFORM0 is a non-linear function, INLINEFORM1 is the embedding of INLINEFORM2 , and INLINEFORM3 is a decoding state for time step INLINEFORM4 .",
"State INLINEFORM0 is calculated by DISPLAYFORM0 ",
"where INLINEFORM0 is the activation function GRU.",
"The INLINEFORM0 is the context vector computed as a weighted annotation INLINEFORM1 , computed by DISPLAYFORM0 ",
"where the weight INLINEFORM0 is computed by DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 , INLINEFORM1 and INLINEFORM2 are weight matrices. The training objective is to maximize the likelihood of the training data. Beam search is employed for decoding."
],
[
"We train an auxiliary system using NMT model from the simplified sentence to the ordinary sentence, which is first trained on the available parallel data. For leveraging simplified sentences to improve the quality of NMT model for text simplification, we propose to adapt the back-translation approach proposed by Sennrich et al. BIBREF11 to our scenario. More concretely, Given one sentence in simplified sentences, we use the simplified-ordinary system in translate mode with greedy decoding to translate it to the ordinary sentences, which is denoted as back-translation. This way, we obtain a synthetic parallel simplified-ordinary sentences. Both the synthetic sentences and the available parallel data are used as training data for the original NMT system."
],
[
"We evaluate the performance of text simplification using neural machine translation on available parallel sentences and additional simplified sentences.",
"Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, which has been used as benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also from Wikipedia corpus whose training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences split into 2,000 for development and 359 for testing.",
"Metrics. Three metrics in text simplification are chosen in this paper. BLEU BIBREF5 is one traditional machine translation metric to assess the degree to which translated simplifications differed from reference simplifications. FKGL measures the readability of the output BIBREF23 . A small FKGL represents simpler output. SARI is a recent text-simplification metric by comparing the output against the source and reference simplifications BIBREF20 .",
"We evaluate the output of all systems using human evaluation. The metric is denoted as Simplicity BIBREF8 . The three non-native fluent English speakers are shown reference sentences and output sentences. They are asked whether the output sentence is much simpler (+2), somewhat simpler (+1), equally (0), somewhat more difficult (-1), and much more difficult (-2) than the reference sentence.",
"Methods. We use OpenNMT BIBREF24 as the implementation of the NMT system for all experiments BIBREF5 . We generally follow the default settings and training procedure described by Klein et al.(2017). We replace out-of-vocabulary words with a special UNK symbol. At prediction time, we replace UNK words with the highest probability score from the attention layer. OpenNMT system used on parallel data is the baseline system. To obtain a synthetic parallel training set, we back-translate a random sample of 100K sentences from the collected simplified corpora. OpenNMT used on parallel data and synthetic data is our model. The benchmarks are run on a Intel(R) Core(TM) i7-5930K [email protected], 32GB Mem, trained on 1 GPU GeForce GTX 1080 (Pascal) with CUDA v. 8.0.",
"We choose three statistical text simplification systems. PBMT-R is a phrase-based method with a reranking post-processing step BIBREF18 . Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R BIBREF25 . SBMT-SARI BIBREF19 is syntax-based translation model using PPDB paraphrase database BIBREF26 and modifies tuning function (using SARI). We choose two neural text simplification systems. NMT is a basic attention-based encoder-decoder model which uses OpenNMT framework to train with two LSTM layers, hidden states of size 500 and 500 hidden units, SGD optimizer, and a dropout rate of 0.3 BIBREF8 . Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, and the parameters are chosen according to the original paper BIBREF20 . For the experiments with synthetic parallel data, we back-translate a random sample of 60 000 sentences from the collected simplified sentences into ordinary sentences. Our model is trained on synthetic data and the available parallel data, denoted as NMT+synthetic.",
"Results. Table 1 shows the results of all models on WikiLarge dataset. We can see that our method (NMT+synthetic) can obtain higher BLEU, lower FKGL and high SARI compared with other models, except Dress on FKGL and SBMT-SARI on SARI. It verified that including synthetic data during training is very effective, and yields an improvement over our baseline NMF by 2.11 BLEU, 1.7 FKGL and 1.07 SARI. We also substantially outperform Dress, who previously reported SOTA result. The results of our human evaluation using Simplicity are also presented in Table 1. NMT on synthetic data is significantly better than PBMT-R, Dress, and SBMT-SARI on Simplicity. It indicates that our method with simplified data is effective at creating simpler output.",
"Results on WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) than NMT from adding simplified training data with synthetic ordinary sentences. Compared with statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still have better results, but slightly worse FKGL and SARI. Similar to the results in WikiLarge, the results of our human evaluation using Simplicity outperforms the other models. In conclusion, Our method produces better results comparing with the baselines, which demonstrates the effectiveness of adding simplified training data."
],
[
"In this paper, we propose one simple method to use simplified corpora during training of NMT systems, with no changes to the network architecture. In the experiments on two datasets, we achieve substantial gains in all tasks, and new SOTA results, via back-translation of simplified sentences into the ordinary sentences, and treating this synthetic data as additional training data. Because we do not change the neural network architecture to integrate simplified corpora, our method can be easily applied to other Neural Text Simplification (NTS) systems. We expect that the effectiveness of our method not only varies with the quality of the NTS system used for back-translation, but also depends on the amount of available parallel and simplified corpora. In the paper, we have only utilized data from Wikipedia for simplified sentences. In the future, many other text sources are available and the impact of not only size, but also of domain should be investigated."
]
],
"section_name": [
"Introduction",
"Related Work",
"Simplified Corpora",
"Text Simplification using Neural Machine Translation",
"Synthetic Simplified Sentences",
"Evaluation",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"20f32998688fbc1ba3b5abe1e3b8fdf91c183161",
"6994f02a03ed49e9bca899a82ff642067de0120a"
],
"answer": [
{
"evidence": [
"We collected a simplified dataset from Simple English Wikipedia that are freely available, which has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 . The simple English Wikipedia is pretty easy to understand than normal English Wikipedia. We downloaded all articles from Simple English Wikipedia. For these articles, we removed stubs, navigation pages and any article that consisted of a single sentence. We then split them into sentences with the Stanford CorNLP BIBREF21 , and deleted these sentences whose number of words are smaller than 10 or large than 40. After removing repeated sentences, we chose 600K sentences as the simplified data with 11.6M words, and the size of vocabulary is 82K."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collected a simplified dataset from Simple English Wikipedia that are freely available, which has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We collected a simplified dataset from Simple English Wikipedia that are freely available, which has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 . The simple English Wikipedia is pretty easy to understand than normal English Wikipedia. We downloaded all articles from Simple English Wikipedia. For these articles, we removed stubs, navigation pages and any article that consisted of a single sentence. We then split them into sentences with the Stanford CorNLP BIBREF21 , and deleted these sentences whose number of words are smaller than 10 or large than 40. After removing repeated sentences, we chose 600K sentences as the simplified data with 11.6M words, and the size of vocabulary is 82K."
],
"extractive_spans": [
"Simple English"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collected a simplified dataset from Simple English Wikipedia that are freely available, which has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"1b823552ea3800ea76408fa88292ced8749bb03c",
"4865fad32d7769dfb059d12a17fdc4169b3d3873"
],
"answer": [
{
"evidence": [
"Metrics. Three metrics in text simplification are chosen in this paper. BLEU BIBREF5 is one traditional machine translation metric to assess the degree to which translated simplifications differed from reference simplifications. FKGL measures the readability of the output BIBREF23 . A small FKGL represents simpler output. SARI is a recent text-simplification metric by comparing the output against the source and reference simplifications BIBREF20 ."
],
"extractive_spans": [
"BLEU ",
"FKGL ",
"SARI "
],
"free_form_answer": "",
"highlighted_evidence": [
"Three metrics in text simplification are chosen in this paper. BLEU BIBREF5 is one traditional machine translation metric to assess the degree to which translated simplifications differed from reference simplifications. FKGL measures the readability of the output BIBREF23 . A small FKGL represents simpler output. SARI is a recent text-simplification metric by comparing the output against the source and reference simplifications BIBREF20 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Metrics. Three metrics in text simplification are chosen in this paper. BLEU BIBREF5 is one traditional machine translation metric to assess the degree to which translated simplifications differed from reference simplifications. FKGL measures the readability of the output BIBREF23 . A small FKGL represents simpler output. SARI is a recent text-simplification metric by comparing the output against the source and reference simplifications BIBREF20 .",
"We evaluate the output of all systems using human evaluation. The metric is denoted as Simplicity BIBREF8 . The three non-native fluent English speakers are shown reference sentences and output sentences. They are asked whether the output sentence is much simpler (+2), somewhat simpler (+1), equally (0), somewhat more difficult (-1), and much more difficult (-2) than the reference sentence."
],
"extractive_spans": [
"BLEU",
"FKGL",
"SARI",
"Simplicity"
],
"free_form_answer": "",
"highlighted_evidence": [
"Three metrics in text simplification are chosen in this paper. BLEU BIBREF5 is one traditional machine translation metric to assess the degree to which translated simplifications differed from reference simplifications. FKGL measures the readability of the output BIBREF23 . A small FKGL represents simpler output. SARI is a recent text-simplification metric by comparing the output against the source and reference simplifications BIBREF20 .",
"We evaluate the output of all systems using human evaluation. The metric is denoted as Simplicity BIBREF8 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"28e5524127b99b32ed80a8a9190f43956d04ef9d",
"3fac20259ebcd7fa5f7fb4bbcbdd449a8653368a"
],
"answer": [
{
"evidence": [
"Results. Table 1 shows the results of all models on WikiLarge dataset. We can see that our method (NMT+synthetic) can obtain higher BLEU, lower FKGL and high SARI compared with other models, except Dress on FKGL and SBMT-SARI on SARI. It verified that including synthetic data during training is very effective, and yields an improvement over our baseline NMF by 2.11 BLEU, 1.7 FKGL and 1.07 SARI. We also substantially outperform Dress, who previously reported SOTA result. The results of our human evaluation using Simplicity are also presented in Table 1. NMT on synthetic data is significantly better than PBMT-R, Dress, and SBMT-SARI on Simplicity. It indicates that our method with simplified data is effective at creating simpler output.",
"Results on WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) than NMT from adding simplified training data with synthetic ordinary sentences. Compared with statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still have better results, but slightly worse FKGL and SARI. Similar to the results in WikiLarge, the results of our human evaluation using Simplicity outperforms the other models. In conclusion, Our method produces better results comparing with the baselines, which demonstrates the effectiveness of adding simplified training data."
],
"extractive_spans": [],
"free_form_answer": "For the WikiLarge dataset, the improvement over baseline NMT is 2.11 BLEU, 1.7 FKGL and 1.07 SARI.\nFor the WikiSmall dataset, the improvement over baseline NMT is 8.37 BLEU.",
"highlighted_evidence": [
" Table 1 shows the results of all models on WikiLarge dataset. We can see that our method (NMT+synthetic) can obtain higher BLEU, lower FKGL and high SARI compared with other models, except Dress on FKGL and SBMT-SARI on SARI. It verified that including synthetic data during training is very effective, and yields an improvement over our baseline NMF by 2.11 BLEU, 1.7 FKGL and 1.07 SARI. ",
"Results on WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) than NMT from adding simplified training data with synthetic ordinary sentences. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Results on WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) than NMT from adding simplified training data with synthetic ordinary sentences. Compared with statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still have better results, but slightly worse FKGL and SARI. Similar to the results in WikiLarge, the results of our human evaluation using Simplicity outperforms the other models. In conclusion, Our method produces better results comparing with the baselines, which demonstrates the effectiveness of adding simplified training data."
],
"extractive_spans": [
"6.37 BLEU"
],
"free_form_answer": "",
"highlighted_evidence": [
"We see substantial improvements (6.37 BLEU) than NMT from adding simplified training data with synthetic ordinary sentences."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1e0e6d31b1e3dd242f9c459d665703a68c1d819e"
],
"answer": [
{
"evidence": [
"Methods. We use OpenNMT BIBREF24 as the implementation of the NMT system for all experiments BIBREF5 . We generally follow the default settings and training procedure described by Klein et al.(2017). We replace out-of-vocabulary words with a special UNK symbol. At prediction time, we replace UNK words with the highest probability score from the attention layer. OpenNMT system used on parallel data is the baseline system. To obtain a synthetic parallel training set, we back-translate a random sample of 100K sentences from the collected simplified corpora. OpenNMT used on parallel data and synthetic data is our model. The benchmarks are run on a Intel(R) Core(TM) i7-5930K [email protected], 32GB Mem, trained on 1 GPU GeForce GTX 1080 (Pascal) with CUDA v. 8.0.",
"We choose three statistical text simplification systems. PBMT-R is a phrase-based method with a reranking post-processing step BIBREF18 . Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R BIBREF25 . SBMT-SARI BIBREF19 is syntax-based translation model using PPDB paraphrase database BIBREF26 and modifies tuning function (using SARI). We choose two neural text simplification systems. NMT is a basic attention-based encoder-decoder model which uses OpenNMT framework to train with two LSTM layers, hidden states of size 500 and 500 hidden units, SGD optimizer, and a dropout rate of 0.3 BIBREF8 . Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, and the parameters are chosen according to the original paper BIBREF20 . For the experiments with synthetic parallel data, we back-translate a random sample of 60 000 sentences from the collected simplified sentences into ordinary sentences. Our model is trained on synthetic data and the available parallel data, denoted as NMT+synthetic."
],
"extractive_spans": [
"OpenNMT",
"PBMT-R",
"Hybrid",
"SBMT-SARI",
"Dress"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use OpenNMT BIBREF24 as the implementation of the NMT system for all experiments BIBREF5 .",
"PBMT-R is a phrase-based method with a reranking post-processing step BIBREF18 . Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R BIBREF25 . SBMT-SARI BIBREF19 is syntax-based translation model using PPDB paraphrase database BIBREF26 and modifies tuning function (using SARI).",
"Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, and the parameters are chosen according to the original paper BIBREF20 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"45cc83e1b1cd503b1636123df45066ecdb167ff0",
"d40f635de72ea1ca998611299d4a948a172027ce"
],
"answer": [
{
"evidence": [
"Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, which has been used as benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also from Wikipedia corpus whose training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences split into 2,000 for development and 359 for testing."
],
"extractive_spans": [
"training set has 89,042 sentence pairs, and the test set has 100 pairs",
"training set contains 296,402",
"2,000 for development and 359 for testing"
],
"free_form_answer": "",
"highlighted_evidence": [
"WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, which has been used as benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also from Wikipedia corpus whose training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences split into 2,000 for development and 359 for testing."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, which has been used as benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also from Wikipedia corpus whose training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences split into 2,000 for development and 359 for testing."
],
"extractive_spans": [],
"free_form_answer": "WikiSmall 89 142 sentence pair and WikiLarge 298 761 sentence pairs. ",
"highlighted_evidence": [
"We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, which has been used as benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also from Wikipedia corpus whose training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences split into 2,000 for development and 359 for testing."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"what language does this paper focus on?",
"what evaluation metrics did they use?",
"by how much did their model improve?",
"what state of the art methods did they compare with?",
"what are the sizes of both datasets?"
],
"question_id": [
"2c6b50877133a499502feb79a682f4023ddab63e",
"f651cd144b7749e82aa1374779700812f64c8799",
"4625cfba3083346a96e573af5464bc26c34ec943",
"326588b1de9ba0fd049ab37c907e6e5413e14acd",
"ebf0d9f9260ed61cbfd79b962df3899d05f9ebfb"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: The evaluation results on WikiLarge. The best results are highlighted in bold on each metric."
],
"file": [
"5-Table1-1.png"
]
} | [
"by how much did their model improve?",
"what are the sizes of both datasets?"
] | [
[
"1810.04428-Evaluation-7",
"1810.04428-Evaluation-6"
],
[
"1810.04428-Evaluation-1"
]
] | [
"For the WikiLarge dataset, the improvement over baseline NMT is 2.11 BLEU, 1.7 FKGL and 1.07 SARI.\nFor the WikiSmall dataset, the improvement over baseline NMT is 8.37 BLEU.",
"WikiSmall 89 142 sentence pair and WikiLarge 298 761 sentence pairs. "
] | 131 |
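
The back-translation procedure described in the text simplification record above can be summarized with a short sketch. This is illustrative only: the simp2ord_model object and its translate method are hypothetical stand-ins for a trained simplified-to-ordinary NMT system, not the API of any specific toolkit, and the function name is an assumption.

def build_training_data(parallel_pairs, simplified_sentences, simp2ord_model):
    # parallel_pairs: list of (ordinary, simplified) sentence pairs.
    # simplified_sentences: monolingual simplified sentences to back-translate.
    synthetic_pairs = []
    for simp in simplified_sentences:
        # Greedy decoding (beam size 1), as stated in the paper text above.
        ordinary = simp2ord_model.translate(simp, beam_size=1)
        synthetic_pairs.append((ordinary, simp))
    # The synthetic pairs are simply mixed with the original parallel data and
    # used to train the ordinary-to-simplified model; the network architecture
    # is left unchanged.
    return parallel_pairs + synthetic_pairs

In a typical run, a random sample of the monolingual simplified sentences is back-translated and the ordinary-to-simplified model is then retrained on the returned mixed data.
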
2004.02192 | Arabic Offensive Language on Twitter: Analysis and Experiments | Detecting offensive language on Twitter has many applications ranging from detecting/predicting bullying to measuring polarization. In this paper, we focus on building effective Arabic offensive tweet detection. We introduce a method for building an offensive dataset that is not biased by topic, dialect, or target. We produce the largest Arabic dataset to date with special tags for vulgarity and hate speech. Next, we analyze the dataset to determine which topics, dialects, and gender are most associated with offensive tweets and how Arabic speakers use offensive language. Lastly, we conduct a large battery of experiments to produce strong results (F1 = 79.7) on the dataset using Support Vector Machine techniques. | {
"paragraphs": [
[
"Disclaimer: Due to the nature of the paper, some examples contain highly offensive language and hate speech. They don't reflect the views of the authors in any way, and the point of the paper is to help fight such speech. Much recent interest has focused on the detection of offensive language and hate speech in online social media. Such language is often associated with undesirable online behaviors such as trolling, cyberbullying, online extremism, political polarization, and propaganda. Thus, offensive language detection is instrumental for a variety of application such as: quantifying polarization BIBREF0, BIBREF1, trolls and propaganda account detection BIBREF2, detecting the likelihood of hate crimes BIBREF3; and predicting conflict BIBREF4. In this paper, we describe our methodology for building a large dataset of Arabic offensive tweets. Given that roughly 1-2% of all Arabic tweets are offensive BIBREF5, targeted annotation is essential for efficiently building a large dataset. Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. Though we suspect that there are common features that span different languages and cultures, some characteristics of Arabic offensive language is language and culture specific. Thus, we conduct a thorough analysis of how Arabic users use offensive language. Next, we use the dataset to train strong Arabic offensive language classifiers using state-of-the-art representations and classification techniques. Specifically, we experiment with static and contextualized embeddings for representation along with a variety of classifiers such as a deep neural network classifier and Support Vector Machine (SVM).",
"The contributions of this paper are as follows:",
"We built the largest Arabic offensive language dataset to date that includes special tags for vulgar language and hate speech. We describe the methodology for building it along with annotation guidelines.",
"We performed thorough analysis of the dataset and described the peculiarities of Arabic offensive language.",
"We experimented with Support Vector Machine classifiers on character and word ngrams classification techniques to provide strong results on Arabic offensive language classification."
],
[
"Many recent papers have focused on the detection of offensive language, including hate speech BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. Offensive language can be categorized as: Vulgar, which include explicit and rude sexual references, Pornographic, and Hateful, which includes offensive remarks concerning people’s race, religion, country, etc. BIBREF14. Prior works have concentrated on building annotated corpora and training classification models. Concerning corpora, hatespeechdata.com attempts to maintain an updated list of hate speech corpora for multiple languages including Arabic and English. Further, SemEval 2019 ran an evaluation task targeted at detecting offensive language, which focused exclusively on English BIBREF15. As for classification models, most studies used supervised classification at either word level BIBREF10, character sequence level BIBREF11, and word embeddings BIBREF9. The studies used different classification techniques including using Naïve Bayes BIBREF10, SVM BIBREF11, and deep learning BIBREF6, BIBREF7, BIBREF12 classification. The accuracy of the aforementioned system ranged between 76% and 90%. Earlier work looked at the use of sentiment words as features as well as contextual features BIBREF13.",
"The work on Arabic offensive language detection is relatively nascent BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF5. Mubarak et al. mubarak2017abusive suggested that certain users are more likely to use offensive languages than others, and they used this insight to build a list of offensive Arabic words and they constructed a labeled set of 1,100 tweets. Abozinadah et al. abozinadah2017detecting used supervised classification based on a variety of features including user profile features, textual features, and network features. They reported an accuracy of nearly 90%. Alakrot et al. alakrot2018towards used supervised classification based on word unigrams and n-grams to detect offensive language in YouTube comments. They improved classification with stemming and achieved a precision of 88%. Albadi et al. albadi2018they focused on detecting religious hate speech using a recurrent neural network. Further; Schmidt and Wiegand schmidt-wiegand-2017-survey surveyed major works on hate speech detection; Fortuna and Nunes Fortuna2018Survey provided a comprehensive survey for techniques and works done in the area between 2004 and 2017.",
"Arabic is a morphologically rich language with a standard variety, namely Modern Standard Arabic (MSA) and is typically used in formal communication, and many dialectal varieties that differ from MSA in lexical selection, morphology, and syntactic structures. For MSA, words are typically derived from a set of thousands of roots by fitting a root into a stem template and the resulting stem may accept a variety of prefixes and suffixes such as coordinating conjunctions and pronouns. Though word segmentation (or stemming) is quite accurate for MSA BIBREF20, with accuracy approaching 99%, dialectal segmentation is not sufficiently reliable, with accuracy ranging between 91-95% for different dialects BIBREF21. Since dialectal Arabic is ubiquitous in Arabic tweets and many tweets have creative spellings of words, recent work on Arabic offensive language detection used character-level models BIBREF5."
],
[
"Our target was to build a large Arabic offensive language dataset that is representative of their appearance on Twitter and is hopefully not biased to specific dialects, topics, or targets. One of the main challenges is that offensive tweets constitute a very small portion of overall tweets. To quantify their proportion, we took 3 random samples of tweets from different days, with each sample composed of 1,000 tweets, and we found that between 1% and 2% of them were in fact offensive (including pornographic advertisement). This percentage is consistent with previously reported percentages BIBREF19. Thus, annotating random tweets is grossly inefficient. One way to overcome this problem is to use a seed list of offensive words to filter tweets. However, doing so is problematic as it would skew the dataset to particular types of offensive language or to specific dialects. Offensiveness is often dialect and country specific.",
"After inspecting many tweets, we observed that many offensive tweets have the vocative particle يا> (“yA” – meaning “O”), which is mainly used in directing the speech to a specific person or group. The ratio of offensive tweets increases to 5% if each tweet contains one vocative particle and to 19% if has at least two vocative particles. Users often repeat this particle for emphasis, as in: يا أمي يا حنونة> (“yA Amy yA Hnwnp” – O my mother, O kind one), which is endearing and non-offensive, and يا كلب يا قذر> (“yA klb yA q*r” – “O dog, O dirty one”), which is offensive. We decided to use this pattern to increase our chances of finding offensive tweets. One of the main advantages of the pattern يا ... يا> (“yA ... yA”) is that it is not associated with any specific topic or genre, and it appears in all Arabic dialects. Though the use of offensive language does not necessitate the appearance of the vocative particle, the particle does not favor any specific offensive expressions and greatly improves our chances of finding offensive tweets. It is clear, the dataset is more biased toward positive class. Using the dataset for real-life application may require de-biasing it by boosting negative class or random sampling additional data from Twitter BIBREF22.Using the Twitter API, we collected 660k Arabic tweets having this pattern between April 15, 2019 and May 6, 2019. To increase diversity, we sorted the word sequences between the vocative particles and took the most frequent 10,000 unique sequences. For each word sequence, we took a random tweet containing each sequence. Then we annotated those tweets, ending up with 1,915 offensive tweets which represent roughly 19% of all tweets. Each tweet was labeled as: offensive, which could additionally be labeled as vulgar and/or hate speech, or Clean. We describe in greater detail our annotation guidelines, which we made sure that they are compatible with the OffensEval2019 annotation guidelines BIBREF15. For example, if a tweet has insults or threats targeting a group based on their nationality, ethnicity, gender, political affiliation, religious belief, or other common characteristics, this is considered as hate speech BIBREF15. It is worth mentioning that we also considered insulting groups based on their sport affiliation as a form of hate speech. In most Arab countries, being a fan of a particularly sporting club is considered as part of the personality and ideology which rarely changes over time (similar to political affiliation). Many incidents of violence have occurred among fans of rival clubs."
],
[
"We developed the annotation guidelines jointly with an experienced annotator, who is a native Arabic speaker with a good knowledge of various Arabic dialects. We made sure that our guidelines were compatible with those of OffensEval2019. The annotator carried out all annotation. Tweets were given one or more of the following four labels: offensive, vulgar, hate speech, or clean. Since the offensive label covers both vulgar and hate speech and vulgarity and hate speech are not mutually exclusive, a tweet can be just offensive or offensive and vulgar and/or hate speech. The annotation adhered to the following guidelines:"
],
[
"Offensive tweets contain explicit or implicit insults or attacks against other people, or inappropriate language, such as:",
"Direct threats or incitement, ex: احرقوا> مقرات المعارضة> (“AHrqwA mqrAt AlmEArDp” – “burn the headquarters of the opposition”) and هذا المنافق يجب قتله> (“h*A AlmnAfq yjb qtlh” – “this hypocrite needs to be killed”).",
"Insults and expressions of contempt, which include: Animal analogy, ex: يا كلب> (“yA klb” – “O dog”) and كل تبن> (“kl tbn” – “eat hay”).; Insult to family members, ex: يا روح أمك> (“yA rwH Amk” – “O mother's soul”); Sexually-related insults, ex: يا ديوث> (“yA dywv” – “O person without envy”); Damnation, ex: الله يلعنك> (“Allh ylEnk” – “may Allah/God curse you”); and Attacks on morals and ethics, ex: يا كاذب> (“yA kA*b” – “O liar”)"
],
[
"Vulgar tweets are a subset of offensive tweets and contain profanity, such as mentions of private parts or sexual-related acts or references."
],
[
"Hate speech tweets, a subset of offensive tweets containing offensive language targeting group based on common characteristics such as: Race, ex: يا زنجي> (“yA znjy” – “O negro”); Ethnicity, ex. الفرس الأنجاس> (“Alfrs AlAnjAs” – “Impure Persians”); Group or party, ex: أبوك شيوعي> (“Abwk $ywEy” – “your father is communist”); and Religion, ex: دينك القذر> (“dynk Alq*r” – “your filthy religion”)."
],
[
"Clean tweets do not contain vulgar or offensive language. We noticed that some tweets have some offensive words, but the whole tweet should not be considered as offensive due to the intention of users. This suggests that normal string match without considering contexts will fail in some cases. Examples of such ambiguous cases include: Humor, ex: يا عدوة الفرحة ههه> (“yA Edwp AlfrHp hhh” – “O enemy of happiness hahaha”); Advice, ex: لا تقل لصاحبك يا خنزير> (“lA tql lSAHbk yA xnzyr” – “don't say to your friend: You are a pig”); Condition, ex: إذا عارضتهم يقولون يا عميل> (“A*A EArDthm yqwlwn yA Emyl” – “if you disagree with them they will say: You are an agent”); Condemnation, ex: لماذا نسب بقول: يا بقرة؟> (“lmA*A nsb bqwl: yA bqrp?” – “Why do we insult others by saying: O cow?”); Self offense, ex: تعبت من لساني القذر> (“tEbt mn lsAny Alq*r” – “I am tired of my dirty tongue”); Non-human target, ex: يا بنت المجنونة يا كورة> (“yA bnt Almjnwnp yA kwrp” – “O daughter of the crazy one O football”); and Quotation from a movies or a story, ex: تاني يا زكي! تاني يا فاشل> (“tAny yA zky! tAny yA fA$l” – “again O Zaky! again O loser”). For other ambiguous cases, the annotator searched Twitter to find how actual users used expressions.",
"Table TABREF11 shows the distribution of the annotated tweets. There are 1,915 offensive tweets, including 225 vulgar tweet and 506 hate speech tweets, and 8,085 clean tweets. To validate the quality of annotation, a random sample of 100 tweets from the data, containing 50 offensive and 50 clean tweets, was given to additional three annotators. We calculated the Inter-Annotator Agreement between the annotators using Fleiss’s Kappa coefficient BIBREF23. The Kappa score was 0.92 indicating high quality annotation and agreement."
],
[
"Given the annotated tweets, we wanted to ascertain the distribution of: types of offensive language, genres where it is used, the dialects used, and the gender of users using such language.",
"Figure FIGREF13 shows the distribution of topics associated with offensive tweets. As the figure shows, sports and politics are most dominant for offensive language including vulgar and hate speech. As for dialect, we looked at MSA and four major dialects, namely Egyptian (EGY), Leventine (LEV), Maghrebi (MGR), and Gulf (GLF). Figure FIGREF14 shows that 71% of vulgar tweets were written in EGY followed by GLF, which accounted for 13% of vulgar tweets. MSA was not used in any of the vulgar tweets. As for offensive tweets in general, EGY and GLF were used in 36% and 35% of the offensive tweets respectively. Unlike the case of vulgar language where MSA was non-existent, 15% of the offensive tweets were in fact written in MSA. For hate speech, GLF and EGY were again dominant and MSA consistuted 21% of the tweets. This is consistent with findings for other languages such as English and Italian where vulgar language was more frequently associated with colloquial language BIBREF24, BIBREF25. Regarding the gender, Figure FIGREF15 shows that the vast majority of offensive tweets, including vulgar and hate speech, were authored by males. Female Twitter users accounted for 14% of offensive tweets in general and 6% and 9% of vulgar and hate speech respectively. Figure FIGREF16 shows a detailed categorization of hate speech types, where the top three include insulting groups based on their political ideology, origin, and sport affiliation. Religious hate speech appeared in only 15% of all hate speech tweets.",
"Next, we analyzed all tweets labeled as offensive to better understand how Arabic speakers use offensive language. Here is a breakdown of usage:",
"Direct name calling: The most frequent attack is to call a person an animal name, and the most used animals were كلب> (“klb” – “dog”), حمار> (“HmAr” – “donkey”), and بهيم> (“bhym” – “beast”). The second most common was insulting mental abilities using words such as غبي> (“gby” – “stupid”) and عبيط> (“EbyT” –“idiot”). Some culture-specific differences should be considered. Not all animal names are used as insults. For example, animals such as أسد> (“Asd” – “lion”), صقر> (“Sqr” – “falcon”), and غزال> (“gzAl” – “gazelle”) are typically used for praise. For other insults, people use: some bird names such as دجاجة> (“djAjp” – “chicken”), بومة> (“bwmp” – “owl”), and غراب> (“grAb” – “crow”); insects such as ذبابة> (“*bAbp” – “fly”), صرصور> (“SrSwr” – “cockroach”), and حشرة> (“H$rp” – “insect”); microorganisms such as جرثومة> (“jrvwmp” – “microbe”) and طحالب> (“THAlb” – “algae”); inanimate objects such as جزمة> (“jzmp” – “shoes”) and سطل> (“sTl” – “bucket”) among other usages.",
"Simile and metaphor: Users use simile and metaphor were they would compare a person to: an animal as in زي الثور> (“zy Alvwr” – “like a bull”), سمعني نهيقك> (“smEny nhyqk” – “let me hear your braying”), and هز ديلك> (“hz dylk” – “wag your tail”); a person with mental or physical disability such as منغولي> (“mngwly” – “Mongolian (down-syndrome)”), معوق> (“mEwq” – “disabled”), and قزم> (“qzm” – “dwarf”); and to the opposite gender such as جيش نوال> (“jy$ nwAl” – “Nawal's army (Nawal is female name)”) and نادي زيزي> (“nAdy zyzy” – “Zizi's club (Zizi is a female pet name)”).",
"Indirect speech: This type of offensive language includes: sarcasm such as أذكى إخواتك> (“A*kY AxwAtk” – “smartest one of your siblings”) and فيلسوف الحمير> (“fylswf AlHmyr” – “the donkeys' philosopher”); questions such as ايه كل الغباء ده> (“Ayh kl AlgbA dh” – “what is all this stupidity”); and indirect speech such as النقاش مع البهايم غير مثمر> (“AlnqA$ mE AlbhAym gyr mvmr” – “no use talking to cattle”).",
"Wishing Evil: This entails wishing death or major harm to befall someone such as ربنا ياخدك> (“rbnA yAxdk” – “May God take (kill) you”), الله يلعنك> (“Allh ylEnk” – “may Allah/God curse you”), and روح في داهية> (“rwH fy dAhyp” – equivalent to “go to hell”).",
"Name alteration: One common way to insult others is to change a letter or two in their names to produce new offensive words that rhyme with the original names. Some examples of such include changing الجزيرة> (“Aljzyrp” – “Aljazeera (channel)”) to الخنزيرة> (“Alxnzyrp” – “the pig”) and خلفان> (“xlfAn” – “Khalfan (person name)”) to خرفان> (“xrfAn” – “crazed”).",
"Societal stratification: Some insults are associated with: certain jobs such as بواب> (“bwAb” – “doorman”) or خادم> (“xAdm” – “servant”); and specific societal components such بدوي> (“bdwy” – “bedouin”) and فلاح> (“flAH” – “farmer”).",
"Immoral behavior: These insults are associated with negative moral traits or behaviors such as حقير> (“Hqyr” – “vile”), خاين> (“xAyn” – “traitor”), and منافق> (“mnAfq” – “hypocrite”).",
"Sexually related: They include expressions such as خول> (“xwl” – “gay”), وسخة> (“wsxp” – “prostitute”), and عرص> (“ErS” – “pimp”).",
"Figure FIGREF17 shows top words with the highest valance score for individual words in the offensive tweets. Larger fonts are used to highlight words with highest score and align as well with the categories mentioned ahead in the breakdown for the offensive languages. We modified the valence score described by BIBREF1 conover2011political to magnify its value based on frequency of occurrence. The score is computed as follows:",
"$V(I) = 2 \\frac{ \\frac{tf(I, C_off)}{total(C_off)}}{\\frac{tf(I, C_off)}{total(C_off)} + \\frac{tf(I, C_cln)}{total(C_cln)} } - 1$",
"where",
"$tf(I, C_i) = \\sum _{a \\in I \\bigcap C_i} [ln(Cnt(a, C_i)) + 1]$",
"$total(C_i) = \\sum _{I} tf(I, C_i)$",
"$Cnt(a, C_i)$ is the number of times word $a$ was used in offensive or clean tweets tweets $C_i$. In essence, we are replacing term frequencies with the natural log of the term frequencies."
],
[
"We conducted an extensive battery of experiments on the dataset to establish strong Arabic offensive language classification results. Though the offensive tweets have finer-grained labels where offensive tweet could also be vulgar or constitute hate speech, we conducted coarser-grained classification to determine if a tweet was offensive or not. For classification, we experimented with several tweet representation and classification models. For tweet representations, we used: the count of positive and negative terms, based on a polarity lexicon; static embeddings, namely fastText and Skip-Gram; and deep contextual embeddings, namely BERTbase-multilingual."
],
[
"We performed several text pre-processing steps. First, we tokenized the text using the Farasa Arabic NLP toolkit BIBREF20. Second, we removed URLs, numbers, and all tweet specific tokens, namely mentions, retweets, and hashtags as they are not part of the language semantic structure, and therefore, not usable in pre-trained embeddings. Third, we performed basic Arabic letter normalization, namely variants of the letter alef to bare alef, ta marbouta to ha, and alef maqsoura to ya. We also separated words that are commonly incorrectly attached such as ياكلب> (“yAklb” – “O dog”), is split to يا كلب> (“yA klb”). Lastly, we normalized letter repetitions to allow for a maximum of 2 repeated letters. For example, the token ههههه> (“hhhhh” – “ha ha ha ..”) is normalized to هه> (“hh”). We also removed the Arabic short-diacritics (tashkeel) and word elongation (kashida)."
],
[
"Since offensive words typically have a negative polarity, we wanted to test the effectiveness of using a polarity lexicon in detecting offensive tweets. For the lexicon, we used NileULex BIBREF26, which is an Arabic polarity lexicon containing 3,279 MSA and 2,674 Egyptian terms, out of which 4,256 are negative and 1,697 are positive. We used the counts of terms with positive polarity and terms with negative polarity in tweets as features."
],
[
"We experimented with various static embeddings that were pre-trained on different corpora with different vector dimensionality. We compared pre-trained embeddings to embeddings that were trained on our dataset. For pre-trained embeddings, we used: fastText Egyptian Arabic pre-trained embeddings BIBREF27 with vector dimensionality of 300; AraVec skip-gram embeddings BIBREF28, trained on 66.9M Arabic tweets with 100-dimensional vectors; and Mazajak skip-gram embeddings BIBREF29, trained on 250M Arabic tweets with 300-dimensional vectors.",
"Sentence embeddings were calculated by taking the mean of the embeddings of their tokens. The importance of testing a character level n-gram model like fastText lies in the agglutinative nature of the Arabic language. We trained a new fastText text classification model BIBREF30 on our dataset with vectors of 40 dimensions, 0.5 learning rate, 2$-$10 character n-grams as features, for 30 epochs. These hyper-parameters were tuned using a 5-fold cross-validated grid-search."
],
[
"We also experimented with pre-trained contextualized embeddings with fine-tuning for down-stream tasks. Recently, deep contextualized language models such as BERT (Bidirectional Encoder Representations from Transformers) BIBREF31, UMLFIT BIBREF32, and OpenAI GPT BIBREF33, to name but a few, have achieved ground-breaking results in many NLP classification and language understanding tasks. In this paper, we used BERTbase-multilingual (hereafter as simply BERT) fine-tuning method to classify Arabic offensive language on Twitter as it eliminates the need of heavily engineered task-specific architectures. Although Robustly Optimized BERT (RoBERTa) embeddings perform better than (BERTlarge) on GLUE BIBREF34, RACE BIBREF35, and SQuAD BIBREF36 tasks, pre-trained multilingual RoBERTa models are not available. BERT is pre-trained on Wikipedia text from 104 languages and comes with hundreds of millions of parameters. It contains an encoder with 12 Transformer blocks, hidden size of 768, and 12 self-attention heads. Though the training data for the BERT embeddings don't match our genre, these embedding use BP sub-word segments. Following devlin-2019-bert, the classification consists of introducing a dense layer over the final hidden state $h$ corresponding to first token of the sequence, [CLS], adding a softmax activation on the top of BERT to predict the probability of the $l$ label:",
"",
"",
"where $W$ is the task-specific weight matrix. During fine-tuning, all BERT parameters together with $W$ are optimized end-to-end to maximize the log-probability of the correct labels."
],
[
"We explored different classifiers. When using lexical features and pre-trained static embeddings, we primarily used an SVM classifier with a radial basis function kernel. Only when using the Mazajak embeddings, we experimented with other classifiers such as AdaBoost and Logistic regression. We did so to show that the SVM classifier was indeed the best of the bunch, and we picked the Mazajak embeddings because they yielded the best results among all static embeddings. We used the Scikit Learn implementations of all the classifiers such as libsvm for the SVM classifier. We also experimented with fastText, which trained embeddings on our data. When using contextualized embeddings, we fine-tuned BERT by adding a fully-connected dense layer followed by a softmax classifier, minimizing the binary cross-entropy loss function for the training data. For all experiments, we used the PyTorch implementation by HuggingFace as it provides pre-trained weights and vocabulary."
],
[
"For all of our experiments, we used 5-fold cross validation with identical folds for all experiments. Table TABREF27 reports on the results of using lexical features, static pre-trained embeddings with an SVM classifier, embeddings trained on our data with fastText classifier, and BERT over a dense layer with softmax activation. As the results show, using Mazajak/SVM yielded the best results overall with large improvements in precision over using BERT. We suspect that the Mazajak/SVM combination performed better than the BERT setup due to the fact that the Mazajak embeddings, though static, were trained on in-domain data, as opposed to BERT. Perhaps if BERT embeddings were trained on tweets, they might have outperformed all other setups. For completeness, we compared 7 other classifiers with SVM using Mazajak embeddings. As results in Table TABREF28 show, using SVM yielded the best results."
],
[
"We inspected the tweets of one fold that were misclassified by the Mazajak/SVM model (36 false positives/121 false negatives) to determine the most common errors."
],
[
"had four main types:",
"[leftmargin=*]",
"Gloating: ex. يا هبيده> (“yA hbydp” - “O you delusional”) referring to fans of rival sports team for thinking they could win.",
"Quoting: ex. لما حد يسب ويقول يا كلب> (“lmA Hd ysb wyqwl yA klb” – “when someone swears and says: O dog”).",
"Idioms: ex. يا فاطر رمضان يا خاسر دينك> (“yA fATr rmDAn yA xAsr dynk” – “o you who does not fast Ramadan, you who have lost your religion”), which is a colloquial idiom.",
"Implicit Sarcasm: ex. يا خاين انت عايز تشكك>",
"في حب الشعب للريس> (“yA KAyn Ant EAwz t$kk fy Hb Al$Eb llrys” – “O traitor, (you) want to question the love of the people to the president ”) where the author is mocking the president's popularity.",
""
],
[
"had two types:",
"[leftmargin=*]",
"Mixture of offensiveness and admiration: ex. calling a girl a puppy يا كلبوبة> (“yA klbwbp” – “O puppy”) in a flirtatious manner.",
"Implicit offensiveness: ex. calling for cure while implying sanity in: وتشفي حكام قطر من المرض> (“wt$fy HkAm qTr mn AlmrD” – “and cure Qatar rulers from illness”).",
"Many errors stem from heavy use of dialectal Arabic as well as ambiguity. Since BERT was trained on Wikipedia (MSA) and Google books, the model failed to classify tweets with dialectal cues. Conversely, Mazajak/SVM is more biased towards dialects, often failing to classify MSA tweets."
],
[
"In this paper we presented a systematic method for building an Arabic offensive language tweet dataset that does not favor specific dialects, topics, or genres. We developed detailed guidelines for tagging the tweets as clean or offensive, including special tags for vulgar tweets and hate speech. We tagged 10,000 tweets, which we plan to release publicly and would constitute the largest available Arabic offensive language dataset. We characterized the offensive tweets in the dataset to determine the topics that illicit such language, the dialects that are most often used, the common modes of offensiveness, and the gender distribution of their authors. We performed this breakdown for offensive tweets in general and for vulgar and hate speech tweets separately. We believe that this is the first detailed analysis of its kind. Lastly, we conducted a large battery of experiments on the dataset, using cross-validation, to establish a strong system for Arabic offensive language detection. We showed that using static embeddings produced a competitive results on the dataset.",
"For future work, we plan to pursue several directions. First, we want explore target specific offensive language, where attacks against an entity or a group may employ certain expressions that are only offensive within the context of that target and completely innocuous otherwise. Second, we plan to examine the effectiveness of cross dialectal and cross lingual learning of offensive language."
]
],
"section_name": [
"Introduction",
"Related Work",
"Data Collection ::: Collecting Arabic Offensive Tweets",
"Data Collection ::: Annotating Tweets",
"Data Collection ::: Annotating Tweets ::: OFFENSIVE (OFF):",
"Data Collection ::: Annotating Tweets ::: VULGAR (VLG):",
"Data Collection ::: Annotating Tweets ::: HATE SPEECH (HS):",
"Data Collection ::: Annotating Tweets ::: CLEAN (CLN):",
"Data Collection ::: Statistics and User Demographics",
"Experiments",
"Experiments ::: Data Pre-processing",
"Experiments ::: Representations ::: Lexical Features",
"Experiments ::: Representations ::: Static Embeddings",
"Experiments ::: Representations ::: Deep Contextualized Embeddings",
"Experiments ::: Classification Models",
"Experiments ::: Evaluation",
"Experiments ::: Error Analysis",
"Experiments ::: Error Analysis ::: False Positives",
"Experiments ::: Error Analysis ::: False Negatives",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"25abd39ac069858fdef5b48e73cfc56265c8d153",
"f39eb3fada5ade32f5912a5c9c330fa44dd2ebbb"
],
"answer": [
{
"evidence": [
"Next, we analyzed all tweets labeled as offensive to better understand how Arabic speakers use offensive language. Here is a breakdown of usage:",
"Direct name calling: The most frequent attack is to call a person an animal name, and the most used animals were كلب> (“klb” – “dog”), حمار> (“HmAr” – “donkey”), and بهيم> (“bhym” – “beast”). The second most common was insulting mental abilities using words such as غبي> (“gby” – “stupid”) and عبيط> (“EbyT” –“idiot”). Some culture-specific differences should be considered. Not all animal names are used as insults. For example, animals such as أسد> (“Asd” – “lion”), صقر> (“Sqr” – “falcon”), and غزال> (“gzAl” – “gazelle”) are typically used for praise. For other insults, people use: some bird names such as دجاجة> (“djAjp” – “chicken”), بومة> (“bwmp” – “owl”), and غراب> (“grAb” – “crow”); insects such as ذبابة> (“*bAbp” – “fly”), صرصور> (“SrSwr” – “cockroach”), and حشرة> (“H$rp” – “insect”); microorganisms such as جرثومة> (“jrvwmp” – “microbe”) and طحالب> (“THAlb” – “algae”); inanimate objects such as جزمة> (“jzmp” – “shoes”) and سطل> (“sTl” – “bucket”) among other usages.",
"Simile and metaphor: Users use simile and metaphor were they would compare a person to: an animal as in زي الثور> (“zy Alvwr” – “like a bull”), سمعني نهيقك> (“smEny nhyqk” – “let me hear your braying”), and هز ديلك> (“hz dylk” – “wag your tail”); a person with mental or physical disability such as منغولي> (“mngwly” – “Mongolian (down-syndrome)”), معوق> (“mEwq” – “disabled”), and قزم> (“qzm” – “dwarf”); and to the opposite gender such as جيش نوال> (“jy$ nwAl” – “Nawal's army (Nawal is female name)”) and نادي زيزي> (“nAdy zyzy” – “Zizi's club (Zizi is a female pet name)”).",
"Indirect speech: This type of offensive language includes: sarcasm such as أذكى إخواتك> (“A*kY AxwAtk” – “smartest one of your siblings”) and فيلسوف الحمير> (“fylswf AlHmyr” – “the donkeys' philosopher”); questions such as ايه كل الغباء ده> (“Ayh kl AlgbA dh” – “what is all this stupidity”); and indirect speech such as النقاش مع البهايم غير مثمر> (“AlnqA$ mE AlbhAym gyr mvmr” – “no use talking to cattle”).",
"Wishing Evil: This entails wishing death or major harm to befall someone such as ربنا ياخدك> (“rbnA yAxdk” – “May God take (kill) you”), الله يلعنك> (“Allh ylEnk” – “may Allah/God curse you”), and روح في داهية> (“rwH fy dAhyp” – equivalent to “go to hell”).",
"Name alteration: One common way to insult others is to change a letter or two in their names to produce new offensive words that rhyme with the original names. Some examples of such include changing الجزيرة> (“Aljzyrp” – “Aljazeera (channel)”) to الخنزيرة> (“Alxnzyrp” – “the pig”) and خلفان> (“xlfAn” – “Khalfan (person name)”) to خرفان> (“xrfAn” – “crazed”).",
"Societal stratification: Some insults are associated with: certain jobs such as بواب> (“bwAb” – “doorman”) or خادم> (“xAdm” – “servant”); and specific societal components such بدوي> (“bdwy” – “bedouin”) and فلاح> (“flAH” – “farmer”).",
"Immoral behavior: These insults are associated with negative moral traits or behaviors such as حقير> (“Hqyr” – “vile”), خاين> (“xAyn” – “traitor”), and منافق> (“mnAfq” – “hypocrite”).",
"Sexually related: They include expressions such as خول> (“xwl” – “gay”), وسخة> (“wsxp” – “prostitute”), and عرص> (“ErS” – “pimp”)."
],
"extractive_spans": [],
"free_form_answer": "Frequent use of direct animal name calling, using simile and metaphors, through indirect speech like sarcasm, wishing evil to others, name alteration, societal stratification, immoral behavior and sexually related uses.",
"highlighted_evidence": [
"Next, we analyzed all tweets labeled as offensive to better understand how Arabic speakers use offensive language. Here is a breakdown of usage:\n\nDirect name calling: The most frequent attack is to call a person an animal name, and the most used animals were كلب> (“klb” – “dog”), حمار> (“HmAr” – “donkey”), and بهيم> (“bhym” – “beast”). The second most common was insulting mental abilities using words such as غبي> (“gby” – “stupid”) and عبيط> (“EbyT” –“idiot”). Some culture-specific differences should be considered. Not all animal names are used as insults. For example, animals such as أسد> (“Asd” – “lion”), صقر> (“Sqr” – “falcon”), and غزال> (“gzAl” – “gazelle”) are typically used for praise. For other insults, people use: some bird names such as دجاجة> (“djAjp” – “chicken”), بومة> (“bwmp” – “owl”), and غراب> (“grAb” – “crow”); insects such as ذبابة> (“*bAbp” – “fly”), صرصور> (“SrSwr” – “cockroach”), and حشرة> (“H$rp” – “insect”); microorganisms such as جرثومة> (“jrvwmp” – “microbe”) and طحالب> (“THAlb” – “algae”); inanimate objects such as جزمة> (“jzmp” – “shoes”) and سطل> (“sTl” – “bucket”) among other usages.\n\nSimile and metaphor: Users use simile and metaphor were they would compare a person to: an animal as in زي الثور> (“zy Alvwr” – “like a bull”), سمعني نهيقك> (“smEny nhyqk” – “let me hear your braying”), and هز ديلك> (“hz dylk” – “wag your tail”); a person with mental or physical disability such as منغولي> (“mngwly” – “Mongolian (down-syndrome)”), معوق> (“mEwq” – “disabled”), and قزم> (“qzm” – “dwarf”); and to the opposite gender such as جيش نوال> (“jy$ nwAl” – “Nawal's army (Nawal is female name)”) and نادي زيزي> (“nAdy zyzy” – “Zizi's club (Zizi is a female pet name)”).\n\nIndirect speech: This type of offensive language includes: sarcasm such as أذكى إخواتك> (“A*kY AxwAtk” – “smartest one of your siblings”) and فيلسوف الحمير> (“fylswf AlHmyr” – “the donkeys' philosopher”); questions such as ايه كل الغباء ده> (“Ayh kl AlgbA dh” – “what is all this stupidity”); and indirect speech such as النقاش مع البهايم غير مثمر> (“AlnqA$ mE AlbhAym gyr mvmr” – “no use talking to cattle”).\n\nWishing Evil: This entails wishing death or major harm to befall someone such as ربنا ياخدك> (“rbnA yAxdk” – “May God take (kill) you”), الله يلعنك> (“Allh ylEnk” – “may Allah/God curse you”), and روح في داهية> (“rwH fy dAhyp” – equivalent to “go to hell”).\n\nName alteration: One common way to insult others is to change a letter or two in their names to produce new offensive words that rhyme with the original names. Some examples of such include changing الجزيرة> (“Aljzyrp” – “Aljazeera (channel)”) to الخنزيرة> (“Alxnzyrp” – “the pig”) and خلفان> (“xlfAn” – “Khalfan (person name)”) to خرفان> (“xrfAn” – “crazed”).\n\nSocietal stratification: Some insults are associated with: certain jobs such as بواب> (“bwAb” – “doorman”) or خادم> (“xAdm” – “servant”); and specific societal components such بدوي> (“bdwy” – “bedouin”) and فلاح> (“flAH” – “farmer”).\n\nImmoral behavior: These insults are associated with negative moral traits or behaviors such as حقير> (“Hqyr” – “vile”), خاين> (“xAyn” – “traitor”), and منافق> (“mnAfq” – “hypocrite”).\n\nSexually related: They include expressions such as خول> (“xwl” – “gay”), وسخة> (“wsxp” – “prostitute”), and عرص> (“ErS” – “pimp”)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Next, we analyzed all tweets labeled as offensive to better understand how Arabic speakers use offensive language. Here is a breakdown of usage:",
"Direct name calling: The most frequent attack is to call a person an animal name, and the most used animals were كلب> (“klb” – “dog”), حمار> (“HmAr” – “donkey”), and بهيم> (“bhym” – “beast”). The second most common was insulting mental abilities using words such as غبي> (“gby” – “stupid”) and عبيط> (“EbyT” –“idiot”). Some culture-specific differences should be considered. Not all animal names are used as insults. For example, animals such as أسد> (“Asd” – “lion”), صقر> (“Sqr” – “falcon”), and غزال> (“gzAl” – “gazelle”) are typically used for praise. For other insults, people use: some bird names such as دجاجة> (“djAjp” – “chicken”), بومة> (“bwmp” – “owl”), and غراب> (“grAb” – “crow”); insects such as ذبابة> (“*bAbp” – “fly”), صرصور> (“SrSwr” – “cockroach”), and حشرة> (“H$rp” – “insect”); microorganisms such as جرثومة> (“jrvwmp” – “microbe”) and طحالب> (“THAlb” – “algae”); inanimate objects such as جزمة> (“jzmp” – “shoes”) and سطل> (“sTl” – “bucket”) among other usages.",
"Simile and metaphor: Users use simile and metaphor were they would compare a person to: an animal as in زي الثور> (“zy Alvwr” – “like a bull”), سمعني نهيقك> (“smEny nhyqk” – “let me hear your braying”), and هز ديلك> (“hz dylk” – “wag your tail”); a person with mental or physical disability such as منغولي> (“mngwly” – “Mongolian (down-syndrome)”), معوق> (“mEwq” – “disabled”), and قزم> (“qzm” – “dwarf”); and to the opposite gender such as جيش نوال> (“jy$ nwAl” – “Nawal's army (Nawal is female name)”) and نادي زيزي> (“nAdy zyzy” – “Zizi's club (Zizi is a female pet name)”).",
"Indirect speech: This type of offensive language includes: sarcasm such as أذكى إخواتك> (“A*kY AxwAtk” – “smartest one of your siblings”) and فيلسوف الحمير> (“fylswf AlHmyr” – “the donkeys' philosopher”); questions such as ايه كل الغباء ده> (“Ayh kl AlgbA dh” – “what is all this stupidity”); and indirect speech such as النقاش مع البهايم غير مثمر> (“AlnqA$ mE AlbhAym gyr mvmr” – “no use talking to cattle”).",
"Wishing Evil: This entails wishing death or major harm to befall someone such as ربنا ياخدك> (“rbnA yAxdk” – “May God take (kill) you”), الله يلعنك> (“Allh ylEnk” – “may Allah/God curse you”), and روح في داهية> (“rwH fy dAhyp” – equivalent to “go to hell”).",
"Name alteration: One common way to insult others is to change a letter or two in their names to produce new offensive words that rhyme with the original names. Some examples of such include changing الجزيرة> (“Aljzyrp” – “Aljazeera (channel)”) to الخنزيرة> (“Alxnzyrp” – “the pig”) and خلفان> (“xlfAn” – “Khalfan (person name)”) to خرفان> (“xrfAn” – “crazed”).",
"Societal stratification: Some insults are associated with: certain jobs such as بواب> (“bwAb” – “doorman”) or خادم> (“xAdm” – “servant”); and specific societal components such بدوي> (“bdwy” – “bedouin”) and فلاح> (“flAH” – “farmer”).",
"Immoral behavior: These insults are associated with negative moral traits or behaviors such as حقير> (“Hqyr” – “vile”), خاين> (“xAyn” – “traitor”), and منافق> (“mnAfq” – “hypocrite”).",
"Sexually related: They include expressions such as خول> (“xwl” – “gay”), وسخة> (“wsxp” – “prostitute”), and عرص> (“ErS” – “pimp”)."
],
"extractive_spans": [
"Direct name calling",
"Simile and metaphor",
"Indirect speech",
"Wishing Evil",
"Name alteration",
"Societal stratification",
"Immoral behavior",
"Sexually related"
],
"free_form_answer": "",
"highlighted_evidence": [
"Next, we analyzed all tweets labeled as offensive to better understand how Arabic speakers use offensive language. Here is a breakdown of usage:\n\nDirect name calling: The most frequent attack is to call a person an animal name, and the most used animals were كلب> (“klb” – “dog”), حمار> (“HmAr” – “donkey”), and بهيم> (“bhym” – “beast”).",
"Simile and metaphor: Users use simile and metaphor were they would compare a person to: an animal as in زي الثور> (“zy Alvwr” – “like a bull”), سمعني نهيقك> (“smEny nhyqk” – “let me hear your braying”), and هز ديلك> (“hz dylk” – “wag your tail”); a person with mental or physical disability such as منغولي> (“mngwly” – “Mongolian (down-syndrome)”), معوق> (“mEwq” – “disabled”), and قزم> (“qzm” – “dwarf”); and to the opposite gender such as جيش نوال> (“jy$ nwAl” – “Nawal's army (Nawal is female name)”) and نادي زيزي> (“nAdy zyzy” – “Zizi's club (Zizi is a female pet name)”).",
"Indirect speech: This type of offensive language includes: sarcasm such as أذكى إخواتك> (“A*kY AxwAtk” – “smartest one of your siblings”) and فيلسوف الحمير> (“fylswf AlHmyr” – “the donkeys' philosopher”); questions such as ايه كل الغباء ده> (“Ayh kl AlgbA dh” – “what is all this stupidity”); and indirect speech such as النقاش مع البهايم غير مثمر> (“AlnqA$ mE AlbhAym gyr mvmr” – “no use talking to cattle”).",
"Wishing Evil: This entails wishing death or major harm to befall someone such as ربنا ياخدك> (“rbnA yAxdk” – “May God take (kill) you”), الله يلعنك> (“Allh ylEnk” – “may Allah/God curse you”), and روح في داهية> (“rwH fy dAhyp” – equivalent to “go to hell”).",
"Name alteration: One common way to insult others is to change a letter or two in their names to produce new offensive words that rhyme with the original names.",
"Societal stratification: Some insults are associated with: certain jobs such as بواب> (“bwAb” – “doorman”) or خادم> (“xAdm” – “servant”); and specific societal components such بدوي> (“bdwy” – “bedouin”) and فلاح> (“flAH” – “farmer”).",
"Immoral behavior: These insults are associated with negative moral traits or behaviors such as حقير> (“Hqyr” – “vile”), خاين> (“xAyn” – “traitor”), and منافق> (“mnAfq” – “hypocrite”).\n\nSexually related: They include expressions such as خول> (“xwl” – “gay”), وسخة> (“wsxp” – “prostitute”), and عرص> (“ErS” – “pimp”)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1ebd58c0b9762e02db554d6baaccd40887e1f476"
],
"answer": [
{
"evidence": [
"Given the annotated tweets, we wanted to ascertain the distribution of: types of offensive language, genres where it is used, the dialects used, and the gender of users using such language.",
"Figure FIGREF13 shows the distribution of topics associated with offensive tweets. As the figure shows, sports and politics are most dominant for offensive language including vulgar and hate speech. As for dialect, we looked at MSA and four major dialects, namely Egyptian (EGY), Leventine (LEV), Maghrebi (MGR), and Gulf (GLF). Figure FIGREF14 shows that 71% of vulgar tweets were written in EGY followed by GLF, which accounted for 13% of vulgar tweets. MSA was not used in any of the vulgar tweets. As for offensive tweets in general, EGY and GLF were used in 36% and 35% of the offensive tweets respectively. Unlike the case of vulgar language where MSA was non-existent, 15% of the offensive tweets were in fact written in MSA. For hate speech, GLF and EGY were again dominant and MSA consistuted 21% of the tweets. This is consistent with findings for other languages such as English and Italian where vulgar language was more frequently associated with colloquial language BIBREF24, BIBREF25. Regarding the gender, Figure FIGREF15 shows that the vast majority of offensive tweets, including vulgar and hate speech, were authored by males. Female Twitter users accounted for 14% of offensive tweets in general and 6% and 9% of vulgar and hate speech respectively. Figure FIGREF16 shows a detailed categorization of hate speech types, where the top three include insulting groups based on their political ideology, origin, and sport affiliation. Religious hate speech appeared in only 15% of all hate speech tweets."
],
"extractive_spans": [
"ascertain the distribution of: types of offensive language, genres where it is used, the dialects used, and the gender of users using such language"
],
"free_form_answer": "",
"highlighted_evidence": [
"Given the annotated tweets, we wanted to ascertain the distribution of: types of offensive language, genres where it is used, the dialects used, and the gender of users using such language.",
"As the figure shows, sports and politics are most dominant for offensive language including vulgar and hate speech. As for dialect, we looked at MSA and four major dialects, namely Egyptian (EGY), Leventine (LEV), Maghrebi (MGR), and Gulf (GLF). Figure FIGREF14 shows that 71% of vulgar tweets were written in EGY followed by GLF, which accounted for 13% of vulgar tweets. MSA was not used in any of the vulgar tweets. As for offensive tweets in general, EGY and GLF were used in 36% and 35% of the offensive tweets respectively. Unlike the case of vulgar language where MSA was non-existent, 15% of the offensive tweets were in fact written in MSA. For hate speech, GLF and EGY were again dominant and MSA consistuted 21% of the tweets. This is consistent with findings for other languages such as English and Italian where vulgar language was more frequently associated with colloquial language BIBREF24, BIBREF25. Regarding the gender, Figure FIGREF15 shows that the vast majority of offensive tweets, including vulgar and hate speech, were authored by males. Female Twitter users accounted for 14% of offensive tweets in general and 6% and 9% of vulgar and hate speech respectively. Figure FIGREF16 shows a detailed categorization of hate speech types, where the top three include insulting groups based on their political ideology, origin, and sport affiliation. Religious hate speech appeared in only 15% of all hate speech tweets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5220a2f038b8c56d9e96fc5b2615f1c1c9cd2c32",
"f6b3f346e5d8ab3b918095325173c2885a0569b3"
],
"answer": [
{
"evidence": [
"We developed the annotation guidelines jointly with an experienced annotator, who is a native Arabic speaker with a good knowledge of various Arabic dialects. We made sure that our guidelines were compatible with those of OffensEval2019. The annotator carried out all annotation. Tweets were given one or more of the following four labels: offensive, vulgar, hate speech, or clean. Since the offensive label covers both vulgar and hate speech and vulgarity and hate speech are not mutually exclusive, a tweet can be just offensive or offensive and vulgar and/or hate speech. The annotation adhered to the following guidelines:"
],
"extractive_spans": [],
"free_form_answer": "One",
"highlighted_evidence": [
"The annotator carried out all annotation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We developed the annotation guidelines jointly with an experienced annotator, who is a native Arabic speaker with a good knowledge of various Arabic dialects. We made sure that our guidelines were compatible with those of OffensEval2019. The annotator carried out all annotation. Tweets were given one or more of the following four labels: offensive, vulgar, hate speech, or clean. Since the offensive label covers both vulgar and hate speech and vulgarity and hate speech are not mutually exclusive, a tweet can be just offensive or offensive and vulgar and/or hate speech. The annotation adhered to the following guidelines:"
],
"extractive_spans": [],
"free_form_answer": "One experienced annotator tagged all tweets",
"highlighted_evidence": [
"We developed the annotation guidelines jointly with an experienced annotator, who is a native Arabic speaker with a good knowledge of various Arabic dialects. We made sure that our guidelines were compatible with those of OffensEval2019. The annotator carried out all annotation. Tweets were given one or more of the following four labels: offensive, vulgar, hate speech, or clean. Since the offensive label covers both vulgar and hate speech and vulgarity and hate speech are not mutually exclusive, a tweet can be just offensive or offensive and vulgar and/or hate speech. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"56a4389da5bf4c1857e5aeb2bdfe499891e462c3",
"f244494f659f5a87779d72272f8f4dbade0d3a99"
],
"answer": [
{
"evidence": [
"Disclaimer: Due to the nature of the paper, some examples contain highly offensive language and hate speech. They don't reflect the views of the authors in any way, and the point of the paper is to help fight such speech. Much recent interest has focused on the detection of offensive language and hate speech in online social media. Such language is often associated with undesirable online behaviors such as trolling, cyberbullying, online extremism, political polarization, and propaganda. Thus, offensive language detection is instrumental for a variety of application such as: quantifying polarization BIBREF0, BIBREF1, trolls and propaganda account detection BIBREF2, detecting the likelihood of hate crimes BIBREF3; and predicting conflict BIBREF4. In this paper, we describe our methodology for building a large dataset of Arabic offensive tweets. Given that roughly 1-2% of all Arabic tweets are offensive BIBREF5, targeted annotation is essential for efficiently building a large dataset. Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. Though we suspect that there are common features that span different languages and cultures, some characteristics of Arabic offensive language is language and culture specific. Thus, we conduct a thorough analysis of how Arabic users use offensive language. Next, we use the dataset to train strong Arabic offensive language classifiers using state-of-the-art representations and classification techniques. Specifically, we experiment with static and contextualized embeddings for representation along with a variety of classifiers such as a deep neural network classifier and Support Vector Machine (SVM)."
],
"extractive_spans": [
"10,000 Arabic tweet dataset "
],
"free_form_answer": "",
"highlighted_evidence": [
"Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Disclaimer: Due to the nature of the paper, some examples contain highly offensive language and hate speech. They don't reflect the views of the authors in any way, and the point of the paper is to help fight such speech. Much recent interest has focused on the detection of offensive language and hate speech in online social media. Such language is often associated with undesirable online behaviors such as trolling, cyberbullying, online extremism, political polarization, and propaganda. Thus, offensive language detection is instrumental for a variety of application such as: quantifying polarization BIBREF0, BIBREF1, trolls and propaganda account detection BIBREF2, detecting the likelihood of hate crimes BIBREF3; and predicting conflict BIBREF4. In this paper, we describe our methodology for building a large dataset of Arabic offensive tweets. Given that roughly 1-2% of all Arabic tweets are offensive BIBREF5, targeted annotation is essential for efficiently building a large dataset. Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. Though we suspect that there are common features that span different languages and cultures, some characteristics of Arabic offensive language is language and culture specific. Thus, we conduct a thorough analysis of how Arabic users use offensive language. Next, we use the dataset to train strong Arabic offensive language classifiers using state-of-the-art representations and classification techniques. Specifically, we experiment with static and contextualized embeddings for representation along with a variety of classifiers such as a deep neural network classifier and Support Vector Machine (SVM)."
],
"extractive_spans": [
"10,000"
],
"free_form_answer": "",
"highlighted_evidence": [
" Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2c2d608a0c0dbc8ce6fde9171c8a960697f06ca2",
"58ade49d7556524503926e0e38f00c2222379c3f"
],
"answer": [
{
"evidence": [
"Disclaimer: Due to the nature of the paper, some examples contain highly offensive language and hate speech. They don't reflect the views of the authors in any way, and the point of the paper is to help fight such speech. Much recent interest has focused on the detection of offensive language and hate speech in online social media. Such language is often associated with undesirable online behaviors such as trolling, cyberbullying, online extremism, political polarization, and propaganda. Thus, offensive language detection is instrumental for a variety of application such as: quantifying polarization BIBREF0, BIBREF1, trolls and propaganda account detection BIBREF2, detecting the likelihood of hate crimes BIBREF3; and predicting conflict BIBREF4. In this paper, we describe our methodology for building a large dataset of Arabic offensive tweets. Given that roughly 1-2% of all Arabic tweets are offensive BIBREF5, targeted annotation is essential for efficiently building a large dataset. Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. Though we suspect that there are common features that span different languages and cultures, some characteristics of Arabic offensive language is language and culture specific. Thus, we conduct a thorough analysis of how Arabic users use offensive language. Next, we use the dataset to train strong Arabic offensive language classifiers using state-of-the-art representations and classification techniques. Specifically, we experiment with static and contextualized embeddings for representation along with a variety of classifiers such as a deep neural network classifier and Support Vector Machine (SVM)."
],
"extractive_spans": [],
"free_form_answer": "It does not use a seed list to gather tweets so the dataset does not skew to specific topics, dialect, targets.",
"highlighted_evidence": [
"Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Disclaimer: Due to the nature of the paper, some examples contain highly offensive language and hate speech. They don't reflect the views of the authors in any way, and the point of the paper is to help fight such speech. Much recent interest has focused on the detection of offensive language and hate speech in online social media. Such language is often associated with undesirable online behaviors such as trolling, cyberbullying, online extremism, political polarization, and propaganda. Thus, offensive language detection is instrumental for a variety of application such as: quantifying polarization BIBREF0, BIBREF1, trolls and propaganda account detection BIBREF2, detecting the likelihood of hate crimes BIBREF3; and predicting conflict BIBREF4. In this paper, we describe our methodology for building a large dataset of Arabic offensive tweets. Given that roughly 1-2% of all Arabic tweets are offensive BIBREF5, targeted annotation is essential for efficiently building a large dataset. Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. Though we suspect that there are common features that span different languages and cultures, some characteristics of Arabic offensive language is language and culture specific. Thus, we conduct a thorough analysis of how Arabic users use offensive language. Next, we use the dataset to train strong Arabic offensive language classifiers using state-of-the-art representations and classification techniques. Specifically, we experiment with static and contextualized embeddings for representation along with a variety of classifiers such as a deep neural network classifier and Support Vector Machine (SVM)."
],
"extractive_spans": [
"our methodology does not use a seed list of offensive words"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What are the distinctive characteristics of how Arabic speakers use offensive language?",
"How did they analyze which topics, dialects and gender are most associated with tweets?",
"How many annotators tagged each tweet?",
"How many tweets are in the dataset?",
"In what way is the offensive dataset not biased by topic, dialect or target?"
],
"question_id": [
"55507f066073b29c1736b684c09c045064053ba9",
"e838275bb0673fba0d67ac00e4307944a2c17be3",
"8dda1ef371933811e2a25a286529c31623cca0c6",
"b3de9357c569fb1454be8f2ac5fcecaea295b967",
"59e58c6fc63cf5b54b632462465bfbd85b1bf3dd"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Distribution of offensive and clean tweets.",
"Figure 1: Topic distribution for offensive language and its sub-categories",
"Figure 3: Gender distribution for offensive language and its sub-categories",
"Figure 2: Dialect distribution for offensive language and its sub-categories",
"Figure 4: Distribution of Hate Speech Types. Note: A tweet may have more than one type.",
"Figure 5: Tag cloud for words with top valence score among offensive class",
"Table 2: Classification performance with different features and models.",
"Table 3: Performance of different classification models on Mazajak embeddings."
],
"file": [
"4-Table1-1.png",
"5-Figure1-1.png",
"5-Figure3-1.png",
"5-Figure2-1.png",
"5-Figure4-1.png",
"6-Figure5-1.png",
"8-Table2-1.png",
"8-Table3-1.png"
]
} | [
"What are the distinctive characteristics of how Arabic speakers use offensive language?",
"How many annotators tagged each tweet?",
"In what way is the offensive dataset not biased by topic, dialect or target?"
] | [
[
"2004.02192-Data Collection ::: Statistics and User Demographics-5",
"2004.02192-Data Collection ::: Statistics and User Demographics-8",
"2004.02192-Data Collection ::: Statistics and User Demographics-10",
"2004.02192-Data Collection ::: Statistics and User Demographics-2",
"2004.02192-Data Collection ::: Statistics and User Demographics-9",
"2004.02192-Data Collection ::: Statistics and User Demographics-4",
"2004.02192-Data Collection ::: Statistics and User Demographics-7",
"2004.02192-Data Collection ::: Statistics and User Demographics-6",
"2004.02192-Data Collection ::: Statistics and User Demographics-3"
],
[
"2004.02192-Data Collection ::: Annotating Tweets-0"
],
[
"2004.02192-Introduction-0"
]
] | [
"Frequent use of direct animal name calling, using simile and metaphors, through indirect speech like sarcasm, wishing evil to others, name alteration, societal stratification, immoral behavior and sexually related uses.",
"One experienced annotator tagged all tweets",
"It does not use a seed list to gather tweets so the dataset does not skew to specific topics, dialect, targets."
] | 132 |
1909.06200 | A Neural Approach to Irony Generation | Ironies can not only express stronger emotions but also show a sense of humor. With the development of social media, ironies are widely used in public. Although many prior research studies have been conducted in irony detection, few studies focus on irony generation. The main challenges for irony generation are the lack of large-scale irony dataset and difficulties in modeling the ironic pattern. In this work, we first systematically define irony generation based on style transfer task. To address the lack of data, we make use of twitter and build a large-scale dataset. We also design a combination of rewards for reinforcement learning to control the generation of ironic sentences. Experimental results demonstrate the effectiveness of our model in terms of irony accuracy, sentiment preservation, and content preservation. | {
"paragraphs": [
[
"The irony is a kind of figurative language, which is widely used on social media BIBREF0 . The irony is defined as a clash between the intended meaning of a sentence and its literal meaning BIBREF1 . As an important aspect of language, irony plays an essential role in sentiment analysis BIBREF2 , BIBREF0 and opinion mining BIBREF3 , BIBREF4 .",
"Although some previous studies focus on irony detection, little attention is paid to irony generation. As ironies can strengthen sentiments and express stronger emotions, we mainly focus on generating ironic sentences. Given a non-ironic sentence, we implement a neural network to transfer it to an ironic sentence and constrain the sentiment polarity of the two sentences to be the same. For example, the input is “I hate it when my plans get ruined\" which is negative in sentiment polarity and the output should be ironic and negative in sentiment as well, such as “I like it when my plans get ruined\". The speaker uses “like\" to be ironic and express his or her negative sentiment. At the same time, our model can preserve contents which are irrelevant to sentiment polarity and irony. According to the categories mentioned in BIBREF5 , irony can be classified into 3 classes: verbal irony by means of a polarity contrast, the sentences containing expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, the sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, the sentences that describe situations that fail to meet some expectations. As ironies in the latter two categories are obscure and hard to understand, we decide to only focus on ironies in the first category in this work. For example, our work can be specifically described as: given a sentence “I hate to be ignored\", we train our model to generate an ironic sentence such as “I love to be ignored\". Although there is “love\" in the generated sentence, the speaker still expresses his or her negative sentiment by irony. We also make some explorations in the transformation from ironic sentences to non-ironic sentences at the end of our work. Because of the lack of previous work and baselines on irony generation, we implement our model based on style transfer. Our work will not only provide the first large-scale irony dataset but also make our model as a benchmark for the irony generation.",
"Recently, unsupervised style transfer becomes a very popular topic. Many state-of-the-art studies try to solve the task with sequence-to-sequence (seq2seq) framework. There are three main ways to build up models. The first is to learn a latent style-independent content representation and generate sentences with the content representation and another style BIBREF6 , BIBREF7 . The second is to directly transfer sentences from one style to another under the control of classifiers and reinforcement learning BIBREF8 . The third is to remove style attribute words from the input sentence and combine the remaining content with new style attribute words BIBREF9 , BIBREF10 . The first method usually obtains better performances via adversarial training with discriminators. The style-independent content representation, nevertheless, is not easily obtained BIBREF11 , which results in poor performances. The second method is suitable for complex styles which are difficult to model and describe. The model can learn the deep semantic features by itself but sometimes the model is sensitive to parameters and hard to train. The third method succeeds to preserve content but cannot work for some complex styles such as democratic and republican. Sentences with those styles usually do not have specific style attribute words. Unfortunately, due to the lack of large irony dataset and difficulties of modeling ironies, there has been little work trying to generate ironies based on seq2seq framework as far as we know. Inspired by methods for style transfer, we decide to implement a specifically designed model based on unsupervised style transfer to explore irony generation.",
"In this paper, in order to address the lack of irony data, we first crawl over 2M tweets from twitter to build a dataset with 262,755 ironic and 112,330 non-ironic tweets. Then, due to the lack of parallel data, we propose a novel model to transfer non-ironic sentences to ironic sentences in an unsupervised way. As ironic style is hard to model and describe, we implement our model with the control of classifiers and reinforcement learning. Different from other studies in style transfer, the transformation from non-ironic to ironic sentences has to preserve sentiment polarity as mentioned above. Therefore, we not only design an irony reward to control the irony accuracy and implement denoising auto-encoder and back-translation to control content preservation but also design a sentiment reward to control sentiment preservation.",
"Experimental results demonstrate that our model achieves a high irony accuracy with well-preserved sentiment and content. The contributions of our work are as follows:"
],
[
"Style Transfer: As irony is a complicated style and hard to model with some specific style attribute words, we mainly focus on studies without editing style attribute words.",
"Some studies are trying to disentangle style representation from content representation. In BIBREF12 , authors leverage adversarial networks to learn separate content representations and style representations. In BIBREF13 and BIBREF6 , researchers combine variational auto-encoders (VAEs) with style discriminators.",
"However, some recent studies BIBREF11 reveal that the disentanglement of content and style representations may not be achieved in practice. Therefore, some other research studies BIBREF9 , BIBREF10 strive to separate content and style by removing stylistic words. Nonetheless, many non-ironic sentences do not have specific stylistic words and as a result, we find it difficult to transfer non-ironic sentences to ironic sentences through this way in practice.",
"Besides, some other research studies do not disentangle style from content but directly learn representations of sentences. In BIBREF8 , authors propose a dual reinforcement learning framework without separating content and style representations. In BIBREF7 , researchers utilize a machine translation model to learn a sentence representation preserving the meaning of the sentence but reducing stylistic properties. In this method, the quality of generated sentences relies on the performance of classifiers to a large extent. Meanwhile, such models are usually sensitive to parameters and difficult to train. In contrast, we combine a pre-training process with reinforcement learning to build up a stable language model and design special rewards for our task.",
"Irony Detection: With the development of social media, irony detection becomes a more important task. Methods for irony detection can be mainly divided into two categories: methods based on feature engineering and methods based on neural networks.",
"As for methods based on feature engineering, In BIBREF1 , authors investigate pragmatic phenomena and various irony markers. In BIBREF14 , researchers leverage a combination of sentiment, distributional semantic and text surface features. Those models rely on hand-crafted features and are hard to implement.",
"When it comes to methods based on neural networks, long short-term memory (LSTM) BIBREF15 network is widely used and is very efficient for irony detection. In BIBREF16 , a tweet is divided into two segments and a subtract layer is implemented to calculate the difference between two segments in order to determine whether the tweet is ironic. In BIBREF17 , authors utilize a recurrent neural network with Bi-LSTM and self-attention without hand-crafted features. In BIBREF18 , researchers propose a system based on a densely connected LSTM network."
],
[
"In this section, we describe how we build our dataset with tweets. First, we crawl over 2M tweets from twitter using GetOldTweets-python. We crawl English tweets from 04/09/2012 to /12/18/2018. We first remove all re-tweets and use langdetect to remove all non-English sentences. Then, we remove hashtags attached at the end of the tweets because they are usually not parts of sentences and will confuse our language model. After that, we utilize Ekphrasis to process tweets. We remove URLs and restore remaining hashtags, elongated words, repeated words, and all-capitalized words. To simplify our dataset, We replace all “ INLINEFORM0 money INLINEFORM1 \" and “ INLINEFORM2 time INLINEFORM3 \" tokens with “ INLINEFORM4 number INLINEFORM5 \" token when using Ekphrasis. And we delete sentences whose lengths are less than 10 or greater than 40. In order to restore abbreviations, we download an abbreviation dictionary from webopedia and restore abbreviations to normal words or phrases according to the dictionary. Finally, we remove sentences which have more than two rare words (appearing less than three times) in order to constrain the size of vocabulary. Finally, we get 662,530 sentences after pre-processing.",
"As neural networks are proved effective in irony detection, we decide to implement a neural classifier in order to classify the sentences into ironic and non-ironic sentences. However, the only high-quality irony dataset we can obtain is the dataset of Semeval-2018 Task 3 and the dataset is pretty small, which will cause overfitting to complex models. Therefore, we just implement a simple one-layer RNN with LSTM cell to classify pre-processed sentences into ironic sentences and non-ironic sentences because LSTM networks are widely used in irony detection. We train the model with the dataset of Semeval-2018 Task 3. After classification, we get 262,755 ironic sentences and 399,775 non-ironic sentences. According to our observation, not all non-ironic sentences are suitable to be transferred into ironic sentences. For example, “just hanging out . watching . is it monday yet\" is hard to transfer because it does not have an explicit sentiment polarity. So we remove all interrogative sentences from the non-ironic sentences and only obtain the sentences which have words expressing strong sentiments. We evaluate the sentiment polarity of each word with TextBlob and we view those words with sentiment scores greater than 0.5 or less than -0.5 as words expressing strong sentiments. Finally, we build our irony dataset with 262,755 ironic sentences and 102,330 non-ironic sentences.",
"[t] Irony Generation Algorithm ",
" INLINEFORM0 pre-train with auto-encoder Pre-train INLINEFORM1 , INLINEFORM2 with INLINEFORM3 using MLE based on Eq. EQREF16 Pre-train INLINEFORM4 , INLINEFORM5 with INLINEFORM6 using MLE based on Eq. EQREF17 INLINEFORM7 pre-train with back-translation Pre-train INLINEFORM8 , INLINEFORM9 , INLINEFORM10 , INLINEFORM11 with INLINEFORM12 using MLE based on Eq. EQREF19 Pre-train INLINEFORM13 , INLINEFORM14 , INLINEFORM15 , INLINEFORM16 with INLINEFORM17 using MLE based on Eq. EQREF20 ",
" INLINEFORM0 train with RL each epoch e = 1, 2, ..., INLINEFORM1 INLINEFORM2 train non-irony2irony with RL INLINEFORM3 in N INLINEFORM4 update INLINEFORM5 , INLINEFORM6 , using INLINEFORM7 based on Eq. EQREF29 INLINEFORM8 back-translation INLINEFORM9 INLINEFORM10 INLINEFORM11 update INLINEFORM12 , INLINEFORM13 , INLINEFORM14 , INLINEFORM15 using MLE based on Eq. EQREF19 INLINEFORM16 train irony2non-irony with RL INLINEFORM17 in I INLINEFORM18 update INLINEFORM19 , INLINEFORM20 , using INLINEFORM21 similar to Eq. EQREF29 INLINEFORM22 back-translation INLINEFORM23 INLINEFORM24 INLINEFORM25 update INLINEFORM26 , INLINEFORM27 , INLINEFORM28 , INLINEFORM29 using MLE based on Eq. EQREF20 "
],
[
"Given two non-parallel corpora: non-ironic corpus N={ INLINEFORM0 , INLINEFORM1 , ..., INLINEFORM2 } and ironic corpus I={ INLINEFORM3 , INLINEFORM4 , ..., INLINEFORM5 }, the goal of our irony generation model is to generate an ironic sentence from a non-ironic sentence while preserving the content and sentiment polarity of the source input sentence. We implement an encoder-decoder framework where two encoders are utilized to encode ironic sentences and non-ironic sentences respectively and two decoders are utilized to decode ironic sentences and non-ironic sentences from latent representations respectively. In order to enforce a shared latent space, we share two layers on both the encoder side and the decoder side. Our model architecture is illustrated in Figure FIGREF13 . We denote irony encoder as INLINEFORM6 , irony decoder as INLINEFORM7 and non-irony encoder as INLINEFORM8 , non-irony decoder as INLINEFORM9 . Their parameters are INLINEFORM10 , INLINEFORM11 , INLINEFORM12 and INLINEFORM13 .",
"Our irony generation algorithm is shown in Algorithm SECREF3 . We first pre-train our model using denoising auto-encoder and back-translation to build up language models for both styles (section SECREF14 ). Then we implement reinforcement learning to train the model to transfer sentences from one style to another (section SECREF21 ). Meanwhile, to achieve content preservation, we utilize back-translation for one time in every INLINEFORM0 time steps."
],
[
"In order to build up our language model and preserve the content, we apply the auto-encoder model. To prevent the model from simply copying the input sentence, we randomly add some noises in the input sentence. Specifically, for every word in the input sentence, there is 10% chance that we delete it, 10 % chance that we duplicate it, 10% chance that we swap it with the next word, or it remains unchanged. We first encode the input sentence INLINEFORM0 or INLINEFORM1 with respective encoder INLINEFORM2 or INLINEFORM3 to obtain its latent representation INLINEFORM4 or INLINEFORM5 and reconstruct the input sentence with the latent representation and respective decoder. So we can get the reconstruction loss for auto-encoder INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1 ",
"In addition to denoising auto-encoder, we implement back-translation BIBREF19 to generate a pseudo-parallel corpus. Suppose our model takes non-ironic sentence INLINEFORM0 as input. We first encode INLINEFORM1 with INLINEFORM2 to obtain its latent representation INLINEFORM3 and decode the latent representation with INLINEFORM4 to get a transferred sentence INLINEFORM5 . Then we encode INLINEFORM6 with INLINEFORM7 and decode its latent representation with INLINEFORM8 to reconstruct the original input sentence INLINEFORM9 . Therefore, our reconstruction loss for back-translation INLINEFORM10 : DISPLAYFORM0 ",
"And if our model takes ironic sentence INLINEFORM0 as input, we can get the reconstruction loss for back-translation as: DISPLAYFORM0 "
],
[
"Since the gold transferred result of input is unavailable, we cannot evaluate the quality of the generated sentence directly. Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively.",
"A pre-trained binary irony classifier based on CNN BIBREF20 is used to evaluate how ironic a sentence is. We denote the parameter of the classifier as INLINEFORM0 and it is fixed during the training process.",
"In order to facilitate the transformation, we design the irony reward as the difference between the irony score of the input sentence and that of the output sentence. Formally, when we input a non-ironic sentence INLINEFORM0 and transfer it to an ironic sentence INLINEFORM1 , our irony reward is defined as: DISPLAYFORM0 ",
"where INLINEFORM0 denotes ironic style and INLINEFORM1 is the probability of that a sentence INLINEFORM2 is ironic.",
"To preserve the sentiment polarity of the input sentence, we also need to use classifiers to evaluate the sentiment polarity of the sentences. However, the sentiment analysis of ironic sentences and non-ironic sentences are different. In the case of figurative languages such as irony, sarcasm or metaphor, the sentiment polarity of the literal meaning may differ significantly from that of the intended figurative meaning BIBREF0 . As we aim to train our model to transfer sentences from non-ironic to ironic, using only one classifier is not enough. As a result, we implement two pre-trained sentiment classifiers for non-ironic sentences and ironic sentences respectively. We denote the parameter of the sentiment classifier for ironic sentences as INLINEFORM0 and that of the sentiment classifier for non-ironic sentences as INLINEFORM1 .",
"A challenge, when we implement two classifiers to evaluate the sentiment polarity, is that the two classifiers trained with different datasets may have different distributions of scores. That means we cannot directly calculate the sentiment reward with scores applied by two classifiers. To alleviate this problem and standardize the prediction results of two classifiers, we set a threshold for each classifier and subtract the respective threshold from scores applied by the classifier to obtain the comparative sentiment polarity score. We get the optimal threshold by maximizing the ability of the classifier according to the distribution of our training data.",
"We denote the threshold of ironic sentiment classifier as INLINEFORM0 and the threshold of non-ironic sentiment classifier as INLINEFORM1 . The standardized sentiment score is defined as INLINEFORM2 and INLINEFORM3 where INLINEFORM4 denotes the positive sentiment polarity and INLINEFORM5 is the probability of that a sentence is positive in sentiment polarity.",
"As mentioned above, the input sentence and the generated sentence should express the same sentiment. For example, if we input a non-ironic sentence “I hate to be ignored\" which is negative in sentiment polarity, the generated ironic sentence should be also negative, such as “I love to be ignored\". To achieve sentiment preservation, we design the sentiment reward as that one minus the absolute value of the difference between the standardized sentiment score of the input sentence and that of the generated sentence. Formally, when we input a non-ironic sentence INLINEFORM0 and transfer it to an ironic sentence INLINEFORM1 , our sentiment reward is defined as: DISPLAYFORM0 ",
"To encourage our model to focus on both the irony accuracy and the sentiment preservation, we apply the harmonic mean of irony reward and sentiment reward: DISPLAYFORM0 "
],
[
"The policy gradient algorithm BIBREF21 is a simple but widely-used algorithm in reinforcement learning. It is used to maximize the expected reward INLINEFORM0 . The objective function to minimize is defined as: DISPLAYFORM0 ",
"where INLINEFORM0 , INLINEFORM1 is the reward of INLINEFORM2 and INLINEFORM3 is the input size."
],
[
" INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 in our model are Transformers BIBREF22 with 4 layers and 2 shared layers. The word embeddings of 128 dimensions are learned during the training process. Our maximum sentence length is set as 40. The optimizer is Adam BIBREF23 and the learning rate is INLINEFORM4 . The batch size is 32 and harmonic weight INLINEFORM5 in Eq.9 is 0.5. We set the interval INLINEFORM6 as 200. The model is pre-trained for 6 epochs and trained for 15 epochs for reinforcement learning.",
"Irony Classifier: We implement a CNN classifier trained with our irony dataset. All the CNN classifiers we utilize in this paper use the same parameters as BIBREF20 .",
"Sentiment Classifier for Irony: We first implement a one-layer LSTM network to classify ironic sentences in our dataset into positive and negative ironies. The LSTM network is trained with the dataset of Semeval 2015 Task 11 BIBREF0 which is used for the sentiment analysis of figurative language in twitter. Then, we use the positive ironies and negative ironies to train the CNN sentiment classifier for irony.",
"Sentiment Classifier for Non-irony: Similar to the training process of the sentiment classifier for irony, we first implement a one-layer LSTM network trained with the dataset for the sentiment analysis of common twitters to classify the non-ironies into positive and negative non-ironies. Then we use the positive and negative non-ironies to train the sentiment classifier for non-irony."
],
[
"We compare our model with the following state-of-art generative models:",
"BackTrans BIBREF7 : In BIBREF7 , authors propose a model using machine translation in order to preserve the meaning of the sentence while reducing stylistic properties.",
"Unpaired BIBREF10 : In BIBREF10 , researchers implement a method to remove emotional words and add desired sentiment controlled by reinforcement learning.",
"CrossAlign BIBREF6 : In BIBREF6 , authors leverage refined alignment of latent representations to perform style transfer and a cross-aligned auto-encoder is implemented.",
"CPTG BIBREF24 : An interpolated reconstruction loss is introduced in BIBREF24 and a discriminator is implemented to control attributes in this work.",
"DualRL BIBREF8 : In BIBREF8 , researchers use two reinforcement rewards simultaneously to control style accuracy and content preservation."
],
[
"In order to evaluate sentiment preservation, we use the absolute value of the difference between the standardized sentiment score of the input sentence and that of the generated sentence. We call the value as sentiment delta (senti delta). Besides, we report the sentiment accuracy (Senti ACC) which measures whether the output sentence has the same sentiment polarity as the input sentence based on our standardized sentiment classifiers. The BLEU score BIBREF25 between the input sentences and the output sentences is calculated to evaluate the content preservation performance. In order to evaluate the overall performance of different models, we also report the geometric mean (G2) and harmonic mean (H2) of the sentiment accuracy and the BLEU score. As for the irony accuracy, we only report it in human evaluation results because it is more accurate for the human to evaluate the quality of irony as it is very complicated.",
"We first sample 50 non-ironic input sentences and their corresponding output sentences of different models. Then, we ask four annotators who are proficient in English to evaluate the qualities of the generated sentences of different models. They are required to rank the output sentences of our model and baselines from the best to the worst in terms of irony accuracy (Irony), Sentiment preservation (Senti) and content preservation (Content). The best output is ranked with 1 and the worst output is ranked with 6. That means that the smaller our human evaluation value is, the better the corresponding model is."
],
[
"Table TABREF35 shows the automatic evaluation results of the models in the transformation from non-ironic sentences to ironic sentences. From the results, our model obtains the best result in sentiment delta. The DualRL model achieves the highest result in other metrics, but most of its outputs are the almost same as the input sentences. So it is reasonable that DualRL system outperforms ours in these metrics but it actually does not transfer the non-ironic sentences to ironic sentences at all. From this perspective, we cannot view DualRL as an effective model for irony generation. In contrast, our model gets results close to those of DualRL and obtains a balance between irony accuracy, sentiment preservation, and content preservation if we also consider the irony accuracy discussed below.",
"And from human evaluation results shown in Table TABREF36 , our model gets the best average rank in irony accuracy. And as mentioned above, the DualRL model usually does not change the input sentence and outputs the same sentence. Therefore, it is reasonable that it obtains the best rank in sentiment and content preservation and ours is the second. However, it still demonstrates that our model, instead of changing nothing, transfers the style of the input sentence with content and sentiment preservation at the same time."
],
[
"In the section, we present some example outputs of different models. Table TABREF37 shows the results of the transformation from non-ironic sentences to ironic sentences. We can observe that: (1) The BackTrans system, the Unpaired system, the CrossAlign system and the CPTG system tends to generate sentences which are towards irony but do not preserve content. (2) The DualRL system preserves content and sentiment very well but even does not change the input sentence. (3) Our model considers both aspects and achieves a better balance among irony accuracy, sentiment and content preservation."
],
[
"Although our model outperforms other style transfer baselines according to automatic and human evaluation results, there are still some failure cases because irony generation is still a very challenging task. We would like to share the issues we meet during our experiments and our solutions to some of them in this section.",
"No Change: As mentioned above, many style transfer models, such as DualRL, tend to make few changes to the input sentence and output the same sentence. Actually, this is a common issue for unsupervised style transfer systems and we also meet it during our experiments. The main reason for the issue is that rewards for content preservation are too prominent and rewards for style accuracy cannot work well. In contrast, in order to guarantee the readability and fluency of the output sentence, we also cannot emphasize too much on rewards for style accuracy because it may cause some other issues such as word repetition mentioned below. A method to solve the problem is tuning hyperparameters and this is also the method we implement in this work. As for content preservation, maybe MLE methods such as back-translation are not enough because they tend to force models to generate specific words. In the future, we should further design some more suitable methods to control content preservation for models without disentangling style and content representations, such as DualRL and ours.",
"Word Repetition: During our experiments, we observe that some of the outputs prefer to repeat the same word as shown in Table TABREF38 . This is because reinforcement learning rewards encourage the model to generate words which can get high scores from classifiers and even back-translation cannot stop it. Our solution is that we can lower the probability of decoding a word in decoders if the word has been generated in the previous time steps during testing. We also try to implement this method during training time but obtain worse performances because it may limit the effects of training. Some previous studies utilize language models to control the fluency of the output sentence and we also try this method. Nonetheless, pre-training a language model with tweets and using it to generate rewards is difficult because tweets are more casual and have more noise. Rewards from that kind of language model are usually not accurate and may confuse the model. In the future, we should come up with better methods to model language fluency with the consideration of irony accuracy, sentiment and content preservation, especially for tweets.",
"Improper Words: As ironic style is hard for our model to learn, it may generate some improper words which make the sentence strange. As the example shown in the Table TABREF38 , the sentiment word in the input sentence is “wonderful\" and the model should change it into a negative word such as “sad\" to make the output sentence ironic. However, the model changes “friday\" and “fifa\" which are not related to ironic styles. We have not found a very effective method to address this issue and maybe we should further explore stronger models to learn ironic styles better."
],
[
"In this section, we describe some additional experiments on the transformation from ironic sentences to non-ironic sentences. Sometimes ironies are hard to understand and may cause misunderstanding, for which our task also explores the transformation from ironic sentences to non-ironic sentences.",
"As shown in Table TABREF46 , we also conduct automatic evaluations and the conclusions are similar to those of the transformation from non-ironic sentences to ironic sentences. As for human evaluation results in Table TABREF47 , our model still can achieve the second-best results in sentiment and content preservation. Nevertheless, DualRL system and ours get poor performances in irony accuracy. The reason may be that the other four baselines tend to generate common and even not fluent sentences which are irrelevant to the input sentences and are hard to be identified as ironies. So annotators usually mark these output sentences as non-ironic sentences, which causes these models to obtain better performances than DualRL and ours but much poorer results in sentiment and content preservation. Some examples are shown in Table TABREF52 ."
],
[
"In this paper, we first systematically define irony generation based on style transfer. Because of the lack of irony data, we make use of twitter and build a large-scale dataset. In order to control irony accuracy, sentiment preservation and content preservation at the same time, we also design a combination of rewards for reinforcement learning and incorporate reinforcement learning with a pre-training process. Experimental results demonstrate that our model outperforms other generative models and our rewards are effective. Although our model design is effective, there are still many errors and we systematically analyze them. In the future, we are interested in exploring these directions and our work may extend to other kinds of ironies which are more difficult to model."
]
],
"section_name": [
"Introduction",
"Related Work",
"Our Dataset",
"Our Method",
"Pretraining",
"Reinforcement Learning",
"Policy Gradient",
"Training Details",
"Baselines",
"Evaluation Metrics",
"Results and Discussions",
"Case Study",
"Error Analysis",
"Additional Experiments",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"25afa4620b5bcade3e3ddcbb5dc52f6c48f370c3"
],
"answer": [
{
"evidence": [
"Irony Classifier: We implement a CNN classifier trained with our irony dataset. All the CNN classifiers we utilize in this paper use the same parameters as BIBREF20 .",
"Sentiment Classifier for Irony: We first implement a one-layer LSTM network to classify ironic sentences in our dataset into positive and negative ironies. The LSTM network is trained with the dataset of Semeval 2015 Task 11 BIBREF0 which is used for the sentiment analysis of figurative language in twitter. Then, we use the positive ironies and negative ironies to train the CNN sentiment classifier for irony.",
"Sentiment Classifier for Non-irony: Similar to the training process of the sentiment classifier for irony, we first implement a one-layer LSTM network trained with the dataset for the sentiment analysis of common twitters to classify the non-ironies into positive and negative non-ironies. Then we use the positive and negative non-ironies to train the sentiment classifier for non-irony.",
"In this section, we describe some additional experiments on the transformation from ironic sentences to non-ironic sentences. Sometimes ironies are hard to understand and may cause misunderstanding, for which our task also explores the transformation from ironic sentences to non-ironic sentences."
],
"extractive_spans": [
"Irony Classifier",
"Sentiment Classifier for Irony",
"Sentiment Classifier for Non-irony",
"transformation from ironic sentences to non-ironic sentences"
],
"free_form_answer": "",
"highlighted_evidence": [
"Irony Classifier: We implement a CNN classifier trained with our irony dataset.",
"Sentiment Classifier for Irony: We first implement a one-layer LSTM network to classify ironic sentences in our dataset into positive and negative ironies.",
"Sentiment Classifier for Non-irony: Similar to the training process of the sentiment classifier for irony, we first implement a one-layer LSTM network trained with the dataset for the sentiment analysis of common twitters to classify the non-ironies into positive and negative non-ironies.",
"In this section, we describe some additional experiments on the transformation from ironic sentences to non-ironic sentences."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"2f5a9abbc4da817206292dd3edad0f7acc0b37a0",
"d32507f5e0ce41caee557d0423de285d0f2376e9"
],
"answer": [
{
"evidence": [
"Since the gold transferred result of input is unavailable, we cannot evaluate the quality of the generated sentence directly. Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively."
],
"extractive_spans": [
"irony accuracy",
"sentiment preservation"
],
"free_form_answer": "",
"highlighted_evidence": [
"Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Since the gold transferred result of input is unavailable, we cannot evaluate the quality of the generated sentence directly. Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively."
],
"extractive_spans": [
" irony accuracy and sentiment preservation"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since the gold transferred result of input is unavailable, we cannot evaluate the quality of the generated sentence directly. Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"ad33c64649fe4ff6e9411efa3028e264fd9d8496",
"bb03e82915e6961380c8c79365ebfb40d8da7733"
],
"answer": [
{
"evidence": [
"Although some previous studies focus on irony detection, little attention is paid to irony generation. As ironies can strengthen sentiments and express stronger emotions, we mainly focus on generating ironic sentences. Given a non-ironic sentence, we implement a neural network to transfer it to an ironic sentence and constrain the sentiment polarity of the two sentences to be the same. For example, the input is “I hate it when my plans get ruined\" which is negative in sentiment polarity and the output should be ironic and negative in sentiment as well, such as “I like it when my plans get ruined\". The speaker uses “like\" to be ironic and express his or her negative sentiment. At the same time, our model can preserve contents which are irrelevant to sentiment polarity and irony. According to the categories mentioned in BIBREF5 , irony can be classified into 3 classes: verbal irony by means of a polarity contrast, the sentences containing expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, the sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, the sentences that describe situations that fail to meet some expectations. As ironies in the latter two categories are obscure and hard to understand, we decide to only focus on ironies in the first category in this work. For example, our work can be specifically described as: given a sentence “I hate to be ignored\", we train our model to generate an ironic sentence such as “I love to be ignored\". Although there is “love\" in the generated sentence, the speaker still expresses his or her negative sentiment by irony. We also make some explorations in the transformation from ironic sentences to non-ironic sentences at the end of our work. Because of the lack of previous work and baselines on irony generation, we implement our model based on style transfer. Our work will not only provide the first large-scale irony dataset but also make our model as a benchmark for the irony generation."
],
"extractive_spans": [
"obscure and hard to understand",
" lack of previous work and baselines on irony generation"
],
"free_form_answer": "",
"highlighted_evidence": [
"According to the categories mentioned in BIBREF5 , irony can be classified into 3 classes: verbal irony by means of a polarity contrast, the sentences containing expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, the sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, the sentences that describe situations that fail to meet some expectations. As ironies in the latter two categories are obscure and hard to understand, we decide to only focus on ironies in the first category in this work. ",
" Because of the lack of previous work and baselines on irony generation, we implement our model based on style transfer. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Although some previous studies focus on irony detection, little attention is paid to irony generation. As ironies can strengthen sentiments and express stronger emotions, we mainly focus on generating ironic sentences. Given a non-ironic sentence, we implement a neural network to transfer it to an ironic sentence and constrain the sentiment polarity of the two sentences to be the same. For example, the input is “I hate it when my plans get ruined\" which is negative in sentiment polarity and the output should be ironic and negative in sentiment as well, such as “I like it when my plans get ruined\". The speaker uses “like\" to be ironic and express his or her negative sentiment. At the same time, our model can preserve contents which are irrelevant to sentiment polarity and irony. According to the categories mentioned in BIBREF5 , irony can be classified into 3 classes: verbal irony by means of a polarity contrast, the sentences containing expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, the sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, the sentences that describe situations that fail to meet some expectations. As ironies in the latter two categories are obscure and hard to understand, we decide to only focus on ironies in the first category in this work. For example, our work can be specifically described as: given a sentence “I hate to be ignored\", we train our model to generate an ironic sentence such as “I love to be ignored\". Although there is “love\" in the generated sentence, the speaker still expresses his or her negative sentiment by irony. We also make some explorations in the transformation from ironic sentences to non-ironic sentences at the end of our work. Because of the lack of previous work and baselines on irony generation, we implement our model based on style transfer. Our work will not only provide the first large-scale irony dataset but also make our model as a benchmark for the irony generation."
],
"extractive_spans": [],
"free_form_answer": "ironies are often obscure and hard to understand",
"highlighted_evidence": [
"According to the categories mentioned in BIBREF5 , irony can be classified into 3 classes: verbal irony by means of a polarity contrast, the sentences containing expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, the sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, the sentences that describe situations that fail to meet some expectations. As ironies in the latter two categories are obscure and hard to understand, we decide to only focus on ironies in the first category in this work."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"25020c7f3c93ef58604a0310a2329879886e7216",
"39faa4e0a5f762bedc3d975c5db454717c5ee78e"
],
"answer": [
{
"evidence": [
"As neural networks are proved effective in irony detection, we decide to implement a neural classifier in order to classify the sentences into ironic and non-ironic sentences. However, the only high-quality irony dataset we can obtain is the dataset of Semeval-2018 Task 3 and the dataset is pretty small, which will cause overfitting to complex models. Therefore, we just implement a simple one-layer RNN with LSTM cell to classify pre-processed sentences into ironic sentences and non-ironic sentences because LSTM networks are widely used in irony detection. We train the model with the dataset of Semeval-2018 Task 3. After classification, we get 262,755 ironic sentences and 399,775 non-ironic sentences. According to our observation, not all non-ironic sentences are suitable to be transferred into ironic sentences. For example, “just hanging out . watching . is it monday yet\" is hard to transfer because it does not have an explicit sentiment polarity. So we remove all interrogative sentences from the non-ironic sentences and only obtain the sentences which have words expressing strong sentiments. We evaluate the sentiment polarity of each word with TextBlob and we view those words with sentiment scores greater than 0.5 or less than -0.5 as words expressing strong sentiments. Finally, we build our irony dataset with 262,755 ironic sentences and 102,330 non-ironic sentences."
],
"extractive_spans": [],
"free_form_answer": "They developed a classifier to find ironic sentences in twitter data",
"highlighted_evidence": [
"Therefore, we just implement a simple one-layer RNN with LSTM cell to classify pre-processed sentences into ironic sentences and non-ironic sentences because LSTM networks are widely used in irony detection."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, in order to address the lack of irony data, we first crawl over 2M tweets from twitter to build a dataset with 262,755 ironic and 112,330 non-ironic tweets. Then, due to the lack of parallel data, we propose a novel model to transfer non-ironic sentences to ironic sentences in an unsupervised way. As ironic style is hard to model and describe, we implement our model with the control of classifiers and reinforcement learning. Different from other studies in style transfer, the transformation from non-ironic to ironic sentences has to preserve sentiment polarity as mentioned above. Therefore, we not only design an irony reward to control the irony accuracy and implement denoising auto-encoder and back-translation to control content preservation but also design a sentiment reward to control sentiment preservation."
],
"extractive_spans": [],
"free_form_answer": "by crawling",
"highlighted_evidence": [
"In this paper, in order to address the lack of irony data, we first crawl over 2M tweets from twitter to build a dataset with 262,755 ironic and 112,330 non-ironic tweets. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"213157aa9a49753e00a601139be65c6b5e1f7806",
"9dfb54db05f0d5b5015c04ba4b01191441dce5f8"
],
"answer": [
{
"evidence": [
"In order to evaluate sentiment preservation, we use the absolute value of the difference between the standardized sentiment score of the input sentence and that of the generated sentence. We call the value as sentiment delta (senti delta). Besides, we report the sentiment accuracy (Senti ACC) which measures whether the output sentence has the same sentiment polarity as the input sentence based on our standardized sentiment classifiers. The BLEU score BIBREF25 between the input sentences and the output sentences is calculated to evaluate the content preservation performance. In order to evaluate the overall performance of different models, we also report the geometric mean (G2) and harmonic mean (H2) of the sentiment accuracy and the BLEU score. As for the irony accuracy, we only report it in human evaluation results because it is more accurate for the human to evaluate the quality of irony as it is very complicated."
],
"extractive_spans": [],
"free_form_answer": "Irony accuracy is judged only by human ; senriment preservation and content preservation are judged both by human and using automatic metrics (ACC and BLEU).",
"highlighted_evidence": [
"Besides, we report the sentiment accuracy (Senti ACC) which measures whether the output sentence has the same sentiment polarity as the input sentence based on our standardized sentiment classifiers. The BLEU score BIBREF25 between the input sentences and the output sentences is calculated to evaluate the content preservation performance. In order to evaluate the overall performance of different models, we also report the geometric mean (G2) and harmonic mean (H2) of the sentiment accuracy and the BLEU score. As for the irony accuracy, we only report it in human evaluation results because it is more accurate for the human to evaluate the quality of irony as it is very complicated."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We first sample 50 non-ironic input sentences and their corresponding output sentences of different models. Then, we ask four annotators who are proficient in English to evaluate the qualities of the generated sentences of different models. They are required to rank the output sentences of our model and baselines from the best to the worst in terms of irony accuracy (Irony), Sentiment preservation (Senti) and content preservation (Content). The best output is ranked with 1 and the worst output is ranked with 6. That means that the smaller our human evaluation value is, the better the corresponding model is."
],
"extractive_spans": [
"four annotators who are proficient in English"
],
"free_form_answer": "",
"highlighted_evidence": [
"Then, we ask four annotators who are proficient in English to evaluate the qualities of the generated sentences of different models. They are required to rank the output sentences of our model and baselines from the best to the worst in terms of irony accuracy (Irony), Sentiment preservation (Senti) and content preservation (Content)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What experiments are conducted?",
"What is the combination of rewards for reinforcement learning?",
"What are the difficulties in modelling the ironic pattern?",
"How did the authors find ironic data on twitter?",
"Who judged the irony accuracy, sentiment preservation and content preservation?"
],
"question_id": [
"5c3e98e3cebaecd5d3e75ec2c9fc3dd267ac3c83",
"3f0ae9b772eeddfbfd239b7e3196dc6dfa21365f",
"14b8ae5656e7d4ee02237288372d9e682b24fdb8",
"e3a2d8886f03e78ed5e138df870f48635875727e",
"62f27fe08ddb67f16857fab2a8a721926ecbb6fb"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"humor",
"humor",
"humor",
"humor",
"humor"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Samples in our dataset.",
"Figure 1: Framework of our model. The solid lines denote the transformation from non-ironic sentences to ironic sentences and the dotted lines denote the transformation from ironic sentences to non-ironic sentences.",
"Table 2: Automatic evaluation results of the transformation from non-ironic sentences to ironic sentences.*",
"Table 3: Human evaluation results of the transformation from non-ironic sentences to ironic sentences.",
"Table 4: Example outputs of the transformation from non-ironic sentences to ironic sentences and the transformation from ironic sentences to non-ironic sentences. We use red and blue to annotate the clashes in the sentences.",
"Table 5: Example error outputs of the transformation from non-ironic sentences to ironic sentences. The main errors are colored.",
"Table 6: Automatic evaluation results of the transformation from ironic sentences to non-ironic sentences.",
"Table 7: Human evaluation results of the transformation from ironic sentences to non-ironic sentences. Note that the larger value for irony metric is better here as we generate non-ironic sentences.",
"Table 8: Example outputs of the transformation from ironic sentences to non-ironic sentences. We use red and blue to annotate the clashes in the sentences."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"8-Table5-1.png",
"9-Table6-1.png",
"9-Table7-1.png",
"10-Table8-1.png"
]
} | [
"What are the difficulties in modelling the ironic pattern?",
"How did the authors find ironic data on twitter?",
"Who judged the irony accuracy, sentiment preservation and content preservation?"
] | [
[
"1909.06200-Introduction-1"
],
[
"1909.06200-Introduction-3",
"1909.06200-Our Dataset-1"
],
[
"1909.06200-Evaluation Metrics-0",
"1909.06200-Evaluation Metrics-1"
]
] | [
"ironies are often obscure and hard to understand",
"by crawling",
"Irony accuracy is judged only by human ; senriment preservation and content preservation are judged both by human and using automatic metrics (ACC and BLEU)."
] | 133 |
1706.06894 | Stance Detection in Turkish Tweets | Stance detection is a classification problem in natural language processing where for a text and target pair, a class result from the set {Favor, Against, Neither} is expected. It is similar to the sentiment analysis problem but instead of the sentiment of the text author, the stance expressed for a particular target is investigated in stance detection. In this paper, we present a stance detection tweet data set for Turkish comprising stance annotations of these tweets for two popular sports clubs as targets. Additionally, we provide the evaluation results of SVM classifiers for each target on this data set, where the classifiers use unigram, bigram, and hashtag features. This study is significant as it presents one of the initial stance detection data sets proposed so far and the first one for Turkish language, to the best of our knowledge. The data set and the evaluation results of the corresponding SVM-based approaches will form plausible baselines for the comparison of future studies on stance detection. | {
"paragraphs": [
[
"Stance detection (also called stance identification or stance classification) is one of the considerably recent research topics in natural language processing (NLP). It is usually defined as a classification problem where for a text and target pair, the stance of the author of the text for that target is expected as a classification output from the set: {Favor, Against, Neither} BIBREF0 .",
"Stance detection is usually considered as a subtask of sentiment analysis (opinion mining) BIBREF1 topic in NLP. Both are mostly performed on social media texts, particularly on tweets, hence both are important components of social media analysis. Nevertheless, in sentiment analysis, the sentiment of the author of a piece of text usually as Positive, Negative, and Neutral is explored while in stance detection, the stance of the author of the text for a particular target (an entity, event, etc.) either explicitly or implicitly referred to in the text is considered. Like sentiment analysis, stance detection systems can be valuable components of information retrieval and other text analysis systems BIBREF0 .",
"Previous work on stance detection include BIBREF2 where a stance classifier based on sentiment and arguing features is proposed in addition to an arguing lexicon automatically compiled. The ultimate approach performs better than distribution-based and uni-gram-based baseline systems BIBREF2 . In BIBREF3 , the authors show that the use of dialogue structure improves stance detection in on-line debates. In BIBREF4 , Hasan and Ng carry out stance detection experiments using different machine learning algorithms, training data sets, features, and inter-post constraints in on-line debates, and draw insightful conclusions based on these experiments. For instance, they find that sequence models like HMMs perform better at stance detection when compared with non-sequence models like Naive Bayes (NB) BIBREF4 . In another related study BIBREF5 , the authors conclude that topic-independent features can be exploited for disagreement detection in on-line dialogues. The employed features include agreement, cue words, denial, hedges, duration, polarity, and punctuation BIBREF5 . Stance detection on a corpus of student essays is considered in BIBREF6 . After using linguistically-motivated feature sets together with multivalued NB and SVM as the learning models, the authors conclude that they outperform two baseline approaches BIBREF6 . In BIBREF7 , the author claims that Wikipedia can be used to determine stances about controversial topics based on their previous work regarding controversy extraction on the Web.",
"Among more recent related work, in BIBREF8 stance detection for unseen targets is studied and bidirectional conditional encoding is employed. The authors state that their approach achieves state-of-the art performance rates BIBREF8 on SemEval 2016 Twitter Stance Detection corpus BIBREF0 . In BIBREF9 , a stance-community detection approach called SCIFNET is proposed. SCIFNET creates networks of people who are stance targets, automatically from the related document collections BIBREF9 using stance expansion and refinement techniques to arrive at stance-coherent networks. A tweet data set annotated with stance information regarding six predefined targets is proposed in BIBREF10 where this data set is annotated through crowdsourcing. The authors indicate that the data set is also annotated with sentiment information in addition to stance, so it can help reveal associations between stance and sentiment BIBREF10 . Lastly, in BIBREF0 , SemEval 2016's aforementioned shared task on Twitter Stance Detection is described. Also provided are the results of the evaluations of 19 systems participating in two subtasks (one with training data set provided and the other without an annotated data set) of the shared task BIBREF0 .",
"In this paper, we present a tweet data set in Turkish annotated with stance information, where the corresponding annotations are made publicly available. The domain of the tweets comprises two popular football clubs which constitute the targets of the tweets included. We also provide the evaluation results of SVM classifiers (for each target) on this data set using unigram, bigram, and hashtag features.",
"To the best of our knowledge, the current study is the first one to target at stance detection in Turkish tweets. Together with the provided annotated data set and the corresponding evaluations with the aforementioned SVM classifiers which can be used as baseline systems, our study will hopefully help increase social media analysis studies on Turkish content.",
"The rest of the paper is organized as follows: In Section SECREF2 , we describe our tweet data set annotated with the target and stance information. Section SECREF3 includes the details of our SVM-based stance classifiers and their evaluation results with discussions. Section SECREF4 includes future research topics based on the current study, and finally Section SECREF5 concludes the paper with a summary."
],
[
"We have decided to consider tweets about popular sports clubs as our domain for stance detection. Considerable amounts of tweets are being published for sports-related events at every instant. Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbahçe (namely, Target-2) which are two of the most popular football clubs in Turkey. As is the case for the sentiment analysis tools, the outputs of the stance detection systems on a stream of tweets about these clubs can facilitate the use of the opinions of the football followers by these clubs.",
"In a previous study on the identification of public health-related tweets, two tweet data sets in Turkish (each set containing 1 million random tweets) have been compiled where these sets belong to two different periods of 20 consecutive days BIBREF11 . We have decided to use one of these sets (corresponding to the period between August 18 and September 6, 2015) and firstly filtered the tweets using the possible names used to refer to the target clubs. Then, we have annotated the stance information in the tweets for these targets as Favor or Against. Within the course of this study, we have not considered those tweets in which the target is not explicitly mentioned, as our initial filtering process reveals.",
"For the purposes of the current study, we have not annotated any tweets with the Neither class. This stance class and even finer-grained classes can be considered in further annotation studies. We should also note that in a few tweets, the target of the stance was the management of the club while in some others a particular footballer of the club is praised or criticised. Still, we have considered the club as the target of the stance in all of the cases and carried out our annotations accordingly.",
"At the end of the annotation process, we have annotated 700 tweets, where 175 tweets are in favor of and 175 tweets are against Target-1, and similarly 175 tweets are in favor of and 175 are against Target-2. Hence, our data set is a balanced one although it is currently limited in size. The corresponding stance annotations are made publicly available at http://ceng.metu.edu.tr/ INLINEFORM0 e120329/ Turkish_Stance_Detection_Tweet_Dataset.csv in Comma Separated Values (CSV) format. The file contains three columns with the corresponding headers. The first column is the tweet id of the corresponding tweet, the second column contains the name of the stance target, and the last column includes the stance of the tweet for the target as Favor or Against.",
"To the best of our knowledge, this is the first publicly-available stance-annotated data set for Turkish. Hence, it is a significant resource as there is a scarcity of annotated data sets, linguistic resources, and NLP tools available for Turkish. Additionally, to the best of our knowledge, it is also significant for being the first stance-annotated data set including sports-related tweets, as previous stance detection data sets mostly include on-line texts on political/ethical issues."
],
[
"It is emphasized in the related literature that unigram-based methods are reliable for the stance detection task BIBREF2 and similarly unigram-based models have been used as baseline models in studies such as BIBREF0 . In order to be used as a baseline and reference system for further studies on stance detection in Turkish tweets, we have trained two SVM classifiers (one for each target) using unigrams as features. Before the extraction of unigrams, we have employed automated preprocessing to filter out the stopwords in our annotated data set of 700 tweets. The stopword list used is the list presented in BIBREF12 which, in turn, is the slightly extended version of the stopword list provided in BIBREF13 .",
"We have used the SVM implementation available in the Weka data mining application BIBREF14 where this particular implementation employs the SMO algorithm BIBREF15 to train a classifier with a linear kernel. The 10-fold cross-validation results of the two classifiers are provided in Table TABREF1 using the metrics of precision, recall, and F-Measure.",
"The evaluation results are quite favorable for both targets and particularly higher for Target-1, considering the fact that they are the initial experiments on the data set. The performance of the classifiers is better for the Favor class for both targets when compared with the performance results for the Against class. This outcome may be due to the common use of some terms when expressing positive stance towards sports clubs in Turkish tweets. The same percentage of common terms may not have been observed in tweets during the expression of negative stances towards the targets. Yet, completely the opposite pattern is observed in stance detection results of baseline systems given in BIBREF0 , i.e., better F-Measure rates have been obtained for the Against class when compared with the Favor class BIBREF0 . Some of the baseline systems reported in BIBREF0 are SVM-based systems using unigrams and ngrams as features similar to our study, but their data sets include all three stance classes of Favor, Against, and Neither, while our data set comprises only tweets classified as belonging to Favor or Against classes. Another difference is that the data sets in BIBREF0 have been divided into training and test sets, while in our study we provide 10-fold cross-validation results on the whole data set. On the other hand, we should also note that SVM-based sentiment analysis systems (such as those given in BIBREF16 ) have been reported to achieve better F-Measure rates for the Positive sentiment class when compared with the results obtained for the Negative class. Therefore, our evaluation results for each stance class seem to be in line with such sentiment analysis systems. Yet, further experiments on the extended versions of our data set should be conducted and the results should again be compared with the stance detection results given in the literature.",
"We have also evaluated SVM classifiers which use only bigrams as features, as ngram-based classifiers have been reported to perform better for the stance detection problem BIBREF0 . However, we have observed that using bigrams as the sole features of the SVM classifiers leads to quite poor results. This observation may be due to the relatively limited size of the tweet data set employed. Still, we can conclude that unigram-based features lead to superior results compared to the results obtained using bigrams as features, based on our experiments on our data set. Yet, ngram-based features may be employed on the extended versions of the data set to verify this conclusion within the course of future work.",
"With an intention to exploit the contribution of hashtag use to stance detection, we have also used the existence of hashtags in tweets as an additional feature to unigrams. The corresponding evaluation results of the SVM classifiers using unigrams together the existence of hashtags as features are provided in Table TABREF2 .",
"When the results given in Table TABREF2 are compared with the results in Table TABREF1 , a slight decrease in F-Measure (0.5%) for Target-1 is observed, while the overall F-Measure value for Target-2 has increased by 1.8%. Although we could not derive sound conclusions mainly due to the relatively small size of our data set, the increase in the performance of the SVM classifier Target-2 is an encouraging evidence for the exploitation of hashtags in a stance detection system. We leave other ways of exploiting hashtags for stance detection as a future work.",
"To sum up, our evaluation results are significant as reference results to be used for comparison purposes and provides evidence for the utility of unigram-based and hashtag-related features in SVM classifiers for the stance detection problem in Turkish tweets."
],
[
"Future work based on the current study includes the following:"
],
[
"Stance detection is a considerably new research area in natural language processing and is considered within the scope of the well-studied topic of sentiment analysis. It is the detection of stance within text towards a target which may be explicitly specified in the text or not. In this study, we present a stance-annotated tweet data set in Turkish where the targets of the annotated stances are two popular sports clubs in Turkey. The corresponding annotations are made publicly-available for research purposes. To the best of our knowledge, this is the first stance detection data set for the Turkish language and also the first sports-related stance-annotated data set. Also presented in this study are SVM classifiers (one for each target) utilizing unigram and bigram features in addition to using the existence of hashtags as another feature. 10-fold cross validation results of these classifiers are presented which can be used as reference results by prospective systems. Both the annotated data set and the classifiers with evaluations are significant since they are the initial contributions to stance detection problem in Turkish tweets."
]
],
"section_name": [
"Introduction",
"A Stance Detection Data Set",
"Stance Detection Experiments Using SVM Classifiers",
"Future Prospects",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"c880645e31f7df0fc9b5305226715efa841bb1ed",
"c9b78e318294b52c5a8848846580a1577163ee1a"
],
"answer": [
{
"evidence": [
"We have decided to consider tweets about popular sports clubs as our domain for stance detection. Considerable amounts of tweets are being published for sports-related events at every instant. Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbahçe (namely, Target-2) which are two of the most popular football clubs in Turkey. As is the case for the sentiment analysis tools, the outputs of the stance detection systems on a stream of tweets about these clubs can facilitate the use of the opinions of the football followers by these clubs.",
"In a previous study on the identification of public health-related tweets, two tweet data sets in Turkish (each set containing 1 million random tweets) have been compiled where these sets belong to two different periods of 20 consecutive days BIBREF11 . We have decided to use one of these sets (corresponding to the period between August 18 and September 6, 2015) and firstly filtered the tweets using the possible names used to refer to the target clubs. Then, we have annotated the stance information in the tweets for these targets as Favor or Against. Within the course of this study, we have not considered those tweets in which the target is not explicitly mentioned, as our initial filtering process reveals.",
"For the purposes of the current study, we have not annotated any tweets with the Neither class. This stance class and even finer-grained classes can be considered in further annotation studies. We should also note that in a few tweets, the target of the stance was the management of the club while in some others a particular footballer of the club is praised or criticised. Still, we have considered the club as the target of the stance in all of the cases and carried out our annotations accordingly."
],
"extractive_spans": [],
"free_form_answer": "tweets are annotated with only Favor or Against for two targets - Galatasaray and Fenerbahçe",
"highlighted_evidence": [
"Fenerbahçe",
"We have decided to consider tweets about popular sports clubs as our domain for stance detection. ",
"Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbahçe (namely, Target-2) which are two of the most popular football clubs in Turkey. ",
"Then, we have annotated the stance information in the tweets for these targets as Favor or Against.",
"For the purposes of the current study, we have not annotated any tweets with the Neither class."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"69d5e4fd481e6d3e7362d536f4e5c7b63128a7cb"
],
"answer": [
{
"evidence": [
"The evaluation results are quite favorable for both targets and particularly higher for Target-1, considering the fact that they are the initial experiments on the data set. The performance of the classifiers is better for the Favor class for both targets when compared with the performance results for the Against class. This outcome may be due to the common use of some terms when expressing positive stance towards sports clubs in Turkish tweets. The same percentage of common terms may not have been observed in tweets during the expression of negative stances towards the targets. Yet, completely the opposite pattern is observed in stance detection results of baseline systems given in BIBREF0 , i.e., better F-Measure rates have been obtained for the Against class when compared with the Favor class BIBREF0 . Some of the baseline systems reported in BIBREF0 are SVM-based systems using unigrams and ngrams as features similar to our study, but their data sets include all three stance classes of Favor, Against, and Neither, while our data set comprises only tweets classified as belonging to Favor or Against classes. Another difference is that the data sets in BIBREF0 have been divided into training and test sets, while in our study we provide 10-fold cross-validation results on the whole data set. On the other hand, we should also note that SVM-based sentiment analysis systems (such as those given in BIBREF16 ) have been reported to achieve better F-Measure rates for the Positive sentiment class when compared with the results obtained for the Negative class. Therefore, our evaluation results for each stance class seem to be in line with such sentiment analysis systems. Yet, further experiments on the extended versions of our data set should be conducted and the results should again be compared with the stance detection results given in the literature."
],
"extractive_spans": [
"Target-1"
],
"free_form_answer": "",
"highlighted_evidence": [
"The evaluation results are quite favorable for both targets and particularly higher for Target-1, considering the fact that they are the initial experiments on the data set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2445953097e02fd266bcbe4c31fe777085565500",
"ecca01454b47543a66d539851f9f213d992bc29e"
],
"answer": [
{
"evidence": [
"With an intention to exploit the contribution of hashtag use to stance detection, we have also used the existence of hashtags in tweets as an additional feature to unigrams. The corresponding evaluation results of the SVM classifiers using unigrams together the existence of hashtags as features are provided in Table TABREF2 ."
],
"extractive_spans": [],
"free_form_answer": "hashtag features contain whether there is any hashtag in the tweet",
"highlighted_evidence": [
"With an intention to exploit the contribution of hashtag use to stance detection, we have also used the existence of hashtags in tweets as an additional feature to unigrams."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"44906931e7528e3e7e3eb546226bef49f7ded00a",
"55857c5ccd933f53fff65bc83365065bb2a951e8"
],
"answer": [
{
"evidence": [
"At the end of the annotation process, we have annotated 700 tweets, where 175 tweets are in favor of and 175 tweets are against Target-1, and similarly 175 tweets are in favor of and 175 are against Target-2. Hence, our data set is a balanced one although it is currently limited in size. The corresponding stance annotations are made publicly available at http://ceng.metu.edu.tr/ INLINEFORM0 e120329/ Turkish_Stance_Detection_Tweet_Dataset.csv in Comma Separated Values (CSV) format. The file contains three columns with the corresponding headers. The first column is the tweet id of the corresponding tweet, the second column contains the name of the stance target, and the last column includes the stance of the tweet for the target as Favor or Against."
],
"extractive_spans": [
"700 "
],
"free_form_answer": "",
"highlighted_evidence": [
"At the end of the annotation process, we have annotated 700 tweets, where 175 tweets are in favor of and 175 tweets are against Target-1, and similarly 175 tweets are in favor of and 175 are against Target-2. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"At the end of the annotation process, we have annotated 700 tweets, where 175 tweets are in favor of and 175 tweets are against Target-1, and similarly 175 tweets are in favor of and 175 are against Target-2. Hence, our data set is a balanced one although it is currently limited in size. The corresponding stance annotations are made publicly available at http://ceng.metu.edu.tr/ INLINEFORM0 e120329/ Turkish_Stance_Detection_Tweet_Dataset.csv in Comma Separated Values (CSV) format. The file contains three columns with the corresponding headers. The first column is the tweet id of the corresponding tweet, the second column contains the name of the stance target, and the last column includes the stance of the tweet for the target as Favor or Against."
],
"extractive_spans": [
"700"
],
"free_form_answer": "",
"highlighted_evidence": [
"At the end of the annotation process, we have annotated 700 tweets, where 175 tweets are in favor of and 175 tweets are against Target-1, and similarly 175 tweets are in favor of and 175 are against Target-2. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"3d4a40cc7fb3f722f1f1b9858096999ece1afcbe",
"7e031f2af7ea603f3126d8568c0239b09fa6fe91"
],
"answer": [
{
"evidence": [
"We have decided to consider tweets about popular sports clubs as our domain for stance detection. Considerable amounts of tweets are being published for sports-related events at every instant. Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbahçe (namely, Target-2) which are two of the most popular football clubs in Turkey. As is the case for the sentiment analysis tools, the outputs of the stance detection systems on a stream of tweets about these clubs can facilitate the use of the opinions of the football followers by these clubs."
],
"extractive_spans": [
"Galatasaray",
"Fenerbahçe"
],
"free_form_answer": "",
"highlighted_evidence": [
"Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbahçe (namely, Target-2) which are two of the most popular football clubs in Turkey."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We have decided to consider tweets about popular sports clubs as our domain for stance detection. Considerable amounts of tweets are being published for sports-related events at every instant. Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbahçe (namely, Target-2) which are two of the most popular football clubs in Turkey. As is the case for the sentiment analysis tools, the outputs of the stance detection systems on a stream of tweets about these clubs can facilitate the use of the opinions of the football followers by these clubs."
],
"extractive_spans": [
"Galatasaray ",
"Fenerbahçe "
],
"free_form_answer": "",
"highlighted_evidence": [
"Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbahçe (namely, Target-2) which are two of the most popular football clubs in Turkey. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"How were the tweets annotated?",
"Which SVM approach resulted in the best performance?",
"What are hashtag features?",
"How many tweets did they collect?",
"Which sports clubs are the targets?"
],
"question_id": [
"9ca447c8959a693a3f7bdd0a2c516f4b86f95718",
"05887a8466e0a2f0df4d6a5ffc5815acd7d9066a",
"c87fcc98625e82fdb494ff0f5309319620d69040",
"500a8ec1c56502529d6e59ba6424331f797f31f0",
"ff6c9af28f0e2bb4fb6a69f124665f8ceb966fbc"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [],
"file": []
} | [
"How were the tweets annotated?",
"What are hashtag features?"
] | [
[
"1706.06894-A Stance Detection Data Set-0",
"1706.06894-A Stance Detection Data Set-1",
"1706.06894-A Stance Detection Data Set-2"
],
[
"1706.06894-Stance Detection Experiments Using SVM Classifiers-4"
]
] | [
"tweets are annotated with only Favor or Against for two targets - Galatasaray and Fenerbahçe",
"hashtag features contain whether there is any hashtag in the tweet"
] | 134 |
1908.11047 | Shallow Syntax in Deep Water | Shallow syntax provides an approximation of phrase-syntactic structure of sentences; it can be produced with high accuracy, and is computationally cheap to obtain. We investigate the role of shallow syntax-aware representations for NLP tasks using two techniques. First, we enhance the ELMo architecture to allow pretraining on predicted shallow syntactic parses, instead of just raw text, so that contextual embeddings make use of shallow syntactic context. Our second method involves shallow syntactic features obtained automatically on downstream task data. Neither approach leads to a significant gain on any of the four downstream tasks we considered relative to ELMo-only baselines. Further analysis using black-box probes confirms that our shallow-syntax-aware contextual embeddings do not transfer to linguistic tasks any more easily than ELMo's embeddings. We take these findings as evidence that ELMo-style pretraining discovers representations which make additional awareness of shallow syntax redundant. | {
"paragraphs": [
[
"The NLP community is revisiting the role of linguistic structure in applications with the advent of contextual word representations (cwrs) derived from pretraining language models on large corpora BIBREF2, BIBREF3, BIBREF4, BIBREF5. Recent work has shown that downstream task performance may benefit from explicitly injecting a syntactic inductive bias into model architectures BIBREF6, even when cwrs are also used BIBREF7. However, high quality linguistic structure annotation at a large scale remains expensive—a trade-off needs to be made between the quality of the annotations and the computational expense of obtaining them. Shallow syntactic structures (BIBREF8; also called chunk sequences) offer a viable middle ground, by providing a flat, non-hierarchical approximation to phrase-syntactic trees (see Fig. FIGREF1 for an example). These structures can be obtained efficiently, and with high accuracy, using sequence labelers. In this paper we consider shallow syntax to be a proxy for linguistic structure.",
"While shallow syntactic chunks are almost as ubiquitous as part-of-speech tags in standard NLP pipelines BIBREF9, their relative merits in the presence of cwrs remain unclear. We investigate the role of these structures using two methods. First, we enhance the ELMo architecture BIBREF0 to allow pretraining on predicted shallow syntactic parses, instead of just raw text, so that contextual embeddings make use of shallow syntactic context (§SECREF2). Our second method involves classical addition of chunk features to cwr-infused architectures for four different downstream tasks (§SECREF3). Shallow syntactic information is obtained automatically using a highly accurate model (97% $F_1$ on standard benchmarks). In both settings, we observe only modest gains on three of the four downstream tasks relative to ELMo-only baselines (§SECREF4).",
"Recent work has probed the knowledge encoded in cwrs and found they capture a surprisingly large amount of syntax BIBREF10, BIBREF1, BIBREF11. We further examine the contextual embeddings obtained from the enhanced architecture and a shallow syntactic context, using black-box probes from BIBREF1. Our analysis indicates that our shallow-syntax-aware contextual embeddings do not transfer to linguistic tasks any more easily than ELMo embeddings (§SECREF18).",
"Overall, our findings show that while shallow syntax can be somewhat useful, ELMo-style pretraining discovers representations which make additional awareness of shallow syntax largely redundant."
],
[
"We briefly review the shallow syntactic structures used in this work, and then present a model architecture to obtain embeddings from shallow Syntactic Context (mSynC)."
],
[
"Base phrase chunking is a cheap sequence-labeling–based alternative to full syntactic parsing, where the sequence consists of non-overlapping labeled segments (Fig. FIGREF1 includes an example.) Full syntactic trees can be converted into such shallow syntactic chunk sequences using a deterministic procedure BIBREF9. BIBREF12 offered a rule-based transformation deriving non-overlapping chunks from phrase-structure trees as found in the Penn Treebank BIBREF13. The procedure percolates some syntactic phrase nodes from a phrase-syntactic tree to the phrase in the leaves of the tree. All overlapping embedded phrases are then removed, and the remainder of the phrase gets the percolated label—this usually corresponds to the head word of the phrase.",
"In order to obtain shallow syntactic annotations on a large corpus, we train a BiLSTM-CRF model BIBREF14, BIBREF15, which achieves 97% $F_1$ on the CoNLL 2000 benchmark test set. The training data is obtained from the CoNLL 2000 shared task BIBREF12, as well as the remaining sections (except §23 and §20) of the Penn Treebank, using the official script for chunk generation. The standard task definition from the shared task includes eleven chunk labels, as shown in Table TABREF4."
],
[
"Traditional language models are estimated to maximize the likelihood of each word $x_i$ given the words that precede it, $p(x_i \\mid x_{<i})$. Given a corpus that is annotated with shallow syntax, we propose to condition on both the preceding words and their annotations.",
"We associate with each word $x_i$ three additional variables (denoted $c_i$): the indices of the beginning and end of the last completed chunk before $x_i$, and its label. For example, in Fig. FIGREF8, $c_4=\\langle 3, 3, \\text{VP}\\rangle $ for $x_4=\\text{the}$. Chunks, $c$ are only used as conditioning context via $p(x_i \\mid x_{<i}, c_{\\leqslant i})$; they are not predicted. Because the $c$ labels depend on the entire sentence through the CRF chunker, conditioning each word's probability on any $c_i$ means that our model is, strictly speaking, not a language model, and it can no longer be meaningfully evaluated using perplexity.",
"A right-to-left model is constructed analogously, conditioning on $c_{\\geqslant i}$ alongside $x_{>i}$. Following BIBREF2, we use a joint objective maximizing data likelihood objectives in both directions, with shared softmax parameters."
],
[
"Our model uses two encoders: $e_{\\mathit {seq}}$ for encoding the sequential history ($x_{<i}$), and $e_{\\mathit {syn}}$ for shallow syntactic (chunk) history ($c_{\\leqslant i}$). For both, we use transformers BIBREF16, which consist of large feedforward networks equipped with multiheaded self-attention mechanisms.",
"As inputs to $e_{\\mathit {seq}}$, we use a context-independent embedding, obtained from a CNN character encoder BIBREF17 for each token $x_i$. The outputs $h_i$ from $e_{\\mathit {seq}}$ represent words in context.",
"Next, we build representations for (observed) chunks in the sentence by concatenating a learned embedding for the chunk label with $h$s for the boundaries and applying a linear projection ($f_\\mathit {proj}$). The output from $f_\\mathit {proj}$ is input to $e_{\\mathit {syn}}$, the shallow syntactic encoder, and results in contextualized chunk representations, $g$. Note that the number of chunks in the sentence is less than or equal to the number of tokens.",
"Each $h_i$ is now concatentated with $g_{c_i}$, where $g_{c_i}$ corresponds to $c_i$, the last chunk before position $i$. Finally, the output is given by $\\mbox{\\textbf {mSynC}}_i = {u}_\\mathit {proj}(h_i, g_{c_i}) = W^\\top [h_i; g_{c_i}]$, where $W$ is a model parameter. For training, $\\mbox{\\textbf {mSynC}}_i$ is used to compute the probability of the next word, using a sampled softmax BIBREF18. For downstream tasks, we use a learned linear weighting of all layers in the encoders to obtain a task-specific mSynC, following BIBREF2."
],
[
"Jointly training both the sequential encoder $e_{\\mathit {seq}}$, and the syntactic encoder $e_{\\mathit {syn}}$ can be expensive, due to the large number of parameters involved. To reduce cost, we initialize our sequential cwrs $h$, using pretrained embeddings from ELMo-transformer. Once initialized as such, the encoder is fine-tuned to the data likelihood objective (§SECREF5). This results in a staged parameter update, which reduces training duration by a factor of 10 in our experiments. We discuss the empirical effect of this approach in §SECREF20."
],
[
"Our second approach incorporates shallow syntactic information in downstream tasks via token-level chunk label embeddings. Task training (and test) data is automatically chunked, and chunk boundary information is passed into the task model via BIOUL encoding of the labels. We add randomly initialized chunk label embeddings to task-specific input encoders, which are then fine-tuned for task-specific objectives. This approach does not require a shallow syntactic encoder or chunk annotations for pretraining cwrs, only a chunker. Hence, this can more directly measure the impact of shallow syntax for a given task."
],
[
"Our experiments evaluate the effect of shallow syntax, via contextualization (mSynC, §SECREF2) and features (§SECREF3). We provide comparisons with four baselines—ELMo-transformer BIBREF0, our reimplementation of the same, as well as two cwr-free baselines, with and without shallow syntactic features. Both ELMo-transformer and mSynC are trained on the 1B word benchmark corpus BIBREF19; the latter also employs chunk annotations (§SECREF2). Experimental settings are detailed in Appendix §SECREF22."
],
[
"We employ four tasks to test the impact of shallow syntax. The first three, namely, coarse and fine-grained named entity recognition (NER), and constituency parsing, are span-based; the fourth is a sentence-level sentiment classification task. Following BIBREF2, we do not apply finetuning to task-specific architectures, allowing us to do a controlled comparison with ELMo. Given an identical base architecture across models for each task, we can attribute any difference in performance to the incorporation of shallow syntax or contextualization. Details of downstream architectures are provided below, and overall dataset statistics for all tasks is shown in the Appendix, Table TABREF26."
],
[
"We use the English portion of the CoNLL 2003 dataset BIBREF20, which provides named entity annotations on newswire data across four different entity types (PER, LOC, ORG, MISC). A bidirectional LSTM-CRF architecture BIBREF14 and a BIOUL tagging scheme were used."
],
[
"The same architecture and tagging scheme from above is also used to predict fine-grained entity annotations from OntoNotes 5.0 BIBREF21. There are 18 fine-grained NER labels in the dataset, including regular named entitities as well as entities such as date, time and common numerical entries."
],
[
"We use the standard Penn Treebank splits, and adopt the span-based model from BIBREF22. Following their approach, we used predicted part-of-speech tags from the Stanford tagger BIBREF23 for training and testing. About 51% of phrase-syntactic constituents align exactly with the predicted chunks used, with a majority being single-width noun phrases. Given that the rule-based procedure used to obtain chunks only propagates the phrase type to the head-word and removes all overlapping phrases to the right, this is expected. We did not employ jack-knifing to obtain predicted chunks on PTB data; as a result there might be differences in the quality of shallow syntax annotations between the train and test portions of the data."
],
[
"We consider fine-grained (5-class) classification on Stanford Sentiment Treebank BIBREF24. The labels are negative, somewhat_negative, neutral, positive and somewhat_positive. Our model was based on the biattentive classification network BIBREF25. We used all phrase lengths in the dataset for training, but test results are reported only on full sentences, following prior work.",
"Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks. Though helpful to span-level task models without cwrs, shallow syntactic features offer little to no benefit to ELMo models. mSynC's performance is similar. This holds even for phrase-structure parsing, where (gold) chunks align with syntactic phrases, indicating that task-relevant signal learned from exposure to shallow syntax is already learned by ELMo. On sentiment classification, chunk features are slightly harmful on average (but variance is high); mSynC again performs similarly to ELMo-transformer. Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs."
],
[
"We further analyze whether awareness of shallow syntax carries over to other linguistic tasks, via probes from BIBREF1. Probes are linear models trained on frozen cwrs to make predictions about linguistic (syntactic and semantic) properties of words and phrases. Unlike §SECREF11, there is minimal downstream task architecture, bringing into focus the transferability of cwrs, as opposed to task-specific adaptation."
],
[
"The ten different probing tasks we used include CCG supertagging BIBREF26, part-of-speech tagging from PTB BIBREF13 and EWT (Universal Depedencies BIBREF27), named entity recognition BIBREF20, base-phrase chunking BIBREF12, grammar error detection BIBREF28, semantic tagging BIBREF29, preposition supersense identification BIBREF30, and event factuality detection BIBREF31. Metrics and references for each are summarized in Table TABREF27. For more details, please see BIBREF1.",
"Results in Table TABREF13 show ten probes. Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks. As we would expect, on the probe for predicting chunk tags, mSynC achieves 96.9 $F_1$ vs. 92.2 $F_1$ for ELMo-transformer, indicating that mSynC is indeed encoding shallow syntax. Overall, the results further confirm that explicit shallow syntax does not offer any benefits over ELMo-transformer."
],
[
"We test whether our staged parameter training (§SECREF9) is a viable alternative to an end-to-end training of both $e_{\\mathit {syn}}$ and $e_{\\mathit {seq}}$. We make a further distinction between fine-tuning $e_{\\mathit {seq}}$ vs. not updating it at all after initialization (frozen).",
"Downstream validation-set $F_1$ on fine-grained NER, reported in Table TABREF21, shows that the end-to-end strategy lags behind the others, perhaps indicating the need to train longer than 10 epochs. However, a single epoch on the 1B-word benchmark takes 36 hours on 2 Tesla V100s, making this prohibitive. Interestingly, the frozen strategy, which takes the least amount of time to converge (24 hours on 1 Tesla V100), also performs almost as well as fine-tuning."
],
[
"We find that exposing cwr-based models to shallow syntax, either through new cwr learning architectures or explicit pipelined features, has little effect on their performance, across several tasks. Linguistic probing also shows that cwrs aware of such structures do not improve task transferability. Our architecture and methods are general enough to be adapted for richer inductive biases, such as those given by full syntactic trees (RNNGs; BIBREF32), or to different pretraining objectives, such as masked language modeling (BERT; BIBREF5); we leave this pursuit to future work."
],
[
"Our baseline pretraining model was a reimplementation of that given in BIBREF0. Hyperparameters were generally identical, but we trained on only 2 GPUs with (up to) 4,000 tokens per batch. This difference in batch size meant we used 6,000 warm up steps with the learning rate schedule of BIBREF16."
],
[
"The function $f_{seq}$ is identical to the 6-layer biLM used in ELMo-transformer. $f_{syn}$, on the other hand, uses only 2 layers. The learned embeddings for the chunk labels have 128 dimensions and are concatenated with the two boundary $h$ of dimension 512. Thus $f_{proj}$ maps $1024 + 128$ dimensions to 512. Further, we did not perform weight averaging over several checkpoints."
],
[
"The size of the shallow syntactic feature embedding was 50 across all experiments, initialized uniform randomly.",
"All model implementations are based on the AllenNLP library BIBREF33."
]
],
"section_name": [
"Introduction",
"Pretraining with Shallow Syntactic Annotations",
"Pretraining with Shallow Syntactic Annotations ::: Shallow Syntax",
"Pretraining with Shallow Syntactic Annotations ::: Pretraining Objective",
"Pretraining with Shallow Syntactic Annotations ::: Pretraining Model Architecture",
"Pretraining with Shallow Syntactic Annotations ::: Pretraining Model Architecture ::: Staged parameter updates",
"Shallow Syntactic Features",
"Experiments",
"Experiments ::: Downstream Task Transfer",
"Experiments ::: Downstream Task Transfer ::: NER",
"Experiments ::: Downstream Task Transfer ::: Fine-grained NER",
"Experiments ::: Downstream Task Transfer ::: Phrase-structure parsing",
"Experiments ::: Downstream Task Transfer ::: Sentiment analysis",
"Experiments ::: Linguistic Probes",
"Experiments ::: Linguistic Probes ::: Probing Tasks",
"Experiments ::: Effect of Training Scheme",
"Conclusion",
"Supplemental Material ::: Hyperparameters ::: ELMo-transformer",
"Supplemental Material ::: Hyperparameters ::: mSynC",
"Supplemental Material ::: Hyperparameters ::: Shallow Syntax"
]
} | {
"answers": [
{
"annotation_id": [
"2bad48da65d39d917a0b83b0a7a4806716d5ae6a",
"2d1e2c82a636511b35c1693fbf825f54f0ffe26d"
],
"answer": [
{
"evidence": [
"Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks. Though helpful to span-level task models without cwrs, shallow syntactic features offer little to no benefit to ELMo models. mSynC's performance is similar. This holds even for phrase-structure parsing, where (gold) chunks align with syntactic phrases, indicating that task-relevant signal learned from exposure to shallow syntax is already learned by ELMo. On sentiment classification, chunk features are slightly harmful on average (but variance is high); mSynC again performs similarly to ELMo-transformer. Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs.",
"FLOAT SELECTED: Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks.",
"FLOAT SELECTED: Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks. Though helpful to span-level task models without cwrs, shallow syntactic features offer little to no benefit to ELMo models. mSynC's performance is similar. This holds even for phrase-structure parsing, where (gold) chunks align with syntactic phrases, indicating that task-relevant signal learned from exposure to shallow syntax is already learned by ELMo. On sentiment classification, chunk features are slightly harmful on average (but variance is high); mSynC again performs similarly to ELMo-transformer. Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2c6669534d5edca7275dd73d7a46ed0518851f52",
"4fb75ee38e52dde284b5d7b8e16517140e7942ef"
],
"answer": [
{
"evidence": [
"Results in Table TABREF13 show ten probes. Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks. As we would expect, on the probe for predicting chunk tags, mSynC achieves 96.9 $F_1$ vs. 92.2 $F_1$ for ELMo-transformer, indicating that mSynC is indeed encoding shallow syntax. Overall, the results further confirm that explicit shallow syntax does not offer any benefits over ELMo-transformer."
],
"extractive_spans": [
"performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks"
],
"free_form_answer": "",
"highlighted_evidence": [
"Results in Table TABREF13 show ten probes. Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Test performance of ELMo-transformer (Peters et al., 2018b) vs. mSynC on several linguistic probes from Liu et al. (2019). In each case, performance of the best layer from the architecture is reported. Details on the probes can be found in §4.2.1."
],
"extractive_spans": [],
"free_form_answer": "3",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Test performance of ELMo-transformer (Peters et al., 2018b) vs. mSynC on several linguistic probes from Liu et al. (2019). In each case, performance of the best layer from the architecture is reported. Details on the probes can be found in §4.2.1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"6c7a55485e094587e7b5006c5e144b5617228b17",
"7089fbd354e883115b804b860f3aaf1050a94c6a"
],
"answer": [
{
"evidence": [
"Recent work has probed the knowledge encoded in cwrs and found they capture a surprisingly large amount of syntax BIBREF10, BIBREF1, BIBREF11. We further examine the contextual embeddings obtained from the enhanced architecture and a shallow syntactic context, using black-box probes from BIBREF1. Our analysis indicates that our shallow-syntax-aware contextual embeddings do not transfer to linguistic tasks any more easily than ELMo embeddings (§SECREF18).",
"FLOAT SELECTED: Table 6: Dataset and metrics for each probing task from Liu et al. (2019), corresponding to Table 3."
],
"extractive_spans": [],
"free_form_answer": "CCG Supertagging CCGBank , PTB part-of-speech tagging, EWT part-of-speech tagging,\nChunking, Named Entity Recognition, Semantic Tagging, Grammar Error Detection, Preposition Supersense Role, Preposition Supersense Function, Event Factuality Detection",
"highlighted_evidence": [
" We further examine the contextual embeddings obtained from the enhanced architecture and a shallow syntactic context, using black-box probes from BIBREF1",
"FLOAT SELECTED: Table 6: Dataset and metrics for each probing task from Liu et al. (2019), corresponding to Table 3."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We further analyze whether awareness of shallow syntax carries over to other linguistic tasks, via probes from BIBREF1. Probes are linear models trained on frozen cwrs to make predictions about linguistic (syntactic and semantic) properties of words and phrases. Unlike §SECREF11, there is minimal downstream task architecture, bringing into focus the transferability of cwrs, as opposed to task-specific adaptation."
],
"extractive_spans": [
"Probes are linear models trained on frozen cwrs to make predictions about linguistic (syntactic and semantic) properties of words and phrases."
],
"free_form_answer": "",
"highlighted_evidence": [
"Probes are linear models trained on frozen cwrs to make predictions about linguistic (syntactic and semantic) properties of words and phrases."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9b906792c66d9528a8aa00d018db288b798b96dd",
"bbc515ce561a7876fcafb943dce83d9e2ad8179b"
],
"answer": [
{
"evidence": [
"Results in Table TABREF13 show ten probes. Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks. As we would expect, on the probe for predicting chunk tags, mSynC achieves 96.9 $F_1$ vs. 92.2 $F_1$ for ELMo-transformer, indicating that mSynC is indeed encoding shallow syntax. Overall, the results further confirm that explicit shallow syntax does not offer any benefits over ELMo-transformer."
],
"extractive_spans": [
"only modest gains on three of the four downstream tasks"
],
"free_form_answer": "",
"highlighted_evidence": [
"Results in Table TABREF13 show ten probes. Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks. As we would expect, on the probe for predicting chunk tags, mSynC achieves 96.9 $F_1$ vs. 92.2 $F_1$ for ELMo-transformer, indicating that mSynC is indeed encoding shallow syntax. Overall, the results further confirm that explicit shallow syntax does not offer any benefits over ELMo-transformer."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks.",
"Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks. Though helpful to span-level task models without cwrs, shallow syntactic features offer little to no benefit to ELMo models. mSynC's performance is similar. This holds even for phrase-structure parsing, where (gold) chunks align with syntactic phrases, indicating that task-relevant signal learned from exposure to shallow syntax is already learned by ELMo. On sentiment classification, chunk features are slightly harmful on average (but variance is high); mSynC again performs similarly to ELMo-transformer. Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs."
],
"extractive_spans": [
" the performance differences across all tasks are small enough "
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks.",
"Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"dd4ef3cd5a919bad01faff9248aec23f5aac0c51"
],
"answer": [
{
"evidence": [
"Our second approach incorporates shallow syntactic information in downstream tasks via token-level chunk label embeddings. Task training (and test) data is automatically chunked, and chunk boundary information is passed into the task model via BIOUL encoding of the labels. We add randomly initialized chunk label embeddings to task-specific input encoders, which are then fine-tuned for task-specific objectives. This approach does not require a shallow syntactic encoder or chunk annotations for pretraining cwrs, only a chunker. Hence, this can more directly measure the impact of shallow syntax for a given task."
],
"extractive_spans": [
"token-level chunk label embeddings",
" chunk boundary information is passed into the task model via BIOUL encoding of the labels"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our second approach incorporates shallow syntactic information in downstream tasks via token-level chunk label embeddings. Task training (and test) data is automatically chunked, and chunk boundary information is passed into the task model via BIOUL encoding of the labels. We add randomly initialized chunk label embeddings to task-specific input encoders, which are then fine-tuned for task-specific objectives."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"Does this method help in sentiment classification task improvement?",
"For how many probe tasks the shallow-syntax-aware contextual embedding perform better than ELMo’s embedding?",
"What are the black-box probes used?",
"What are improvements for these two approaches relative to ELMo-only baselines?",
"Which syntactic features are obtained automatically on downstream task data?"
],
"question_id": [
"f2155dc4aeab86bf31a838c8ff388c85440fce6e",
"ed6a15f0f7fa4594e51d5bde21cc0c6c1bedbfdc",
"4d706ce5bde82caf40241f5b78338ea5ee5eb01e",
"86bf75245358f17e35fc133e46a92439ac86d472",
"9132d56e26844dc13b3355448d0f14b95bd2178a"
],
"question_writer": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: A sentence with its phrase-syntactic tree (brown) and shallow syntactic (chunk) annotations (red). Nodes in the tree which percolate down as chunk labels are in red. Not all tokens in the sentence get chunk labels; e.g., punctuation is not part of a chunk.",
"Table 1: Shallow syntactic chunk phrase types from CoNLL 2000 shared task (Tjong Kim Sang and Buchholz, 2000) and their occurrence % in the training data.",
"Figure 2: Model architecture for pretraining with shallow syntax. A sequential encoder converts the raw text into CWRs (shown in blue). Observed shallow syntactic structure (chunk boundaries and labels, shown in red) are combined with these CWRs in a shallow syntactic encoder to get contextualized representations for chunks (shown in orange). Both representations are passed through a projection layer to get mSynC embeddings (details shown only in some positions, for clarity), used both for computing the data likelihood, as shown, as well as in downstream tasks.",
"Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks.",
"Table 3: Test performance of ELMo-transformer (Peters et al., 2018b) vs. mSynC on several linguistic probes from Liu et al. (2019). In each case, performance of the best layer from the architecture is reported. Details on the probes can be found in §4.2.1.",
"Table 4: Validation F1 for fine-grained NER across syntactic pretraining schemes, with mean and standard deviations across 3 runs.",
"Table 5: Downstream dataset statistics describing the number of train, heldout and test set instances for each task.",
"Table 6: Dataset and metrics for each probing task from Liu et al. (2019), corresponding to Table 3."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"3-Figure2-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"8-Table5-1.png",
"8-Table6-1.png"
]
} | [
"For how many probe tasks the shallow-syntax-aware contextual embedding perform better than ELMo’s embedding?",
"What are the black-box probes used?"
] | [
[
"1908.11047-5-Table3-1.png",
"1908.11047-Experiments ::: Linguistic Probes ::: Probing Tasks-1"
],
[
"1908.11047-Experiments ::: Linguistic Probes-0",
"1908.11047-8-Table6-1.png",
"1908.11047-Introduction-2"
]
] | [
"3",
"CCG Supertagging CCGBank , PTB part-of-speech tagging, EWT part-of-speech tagging,\nChunking, Named Entity Recognition, Semantic Tagging, Grammar Error Detection, Preposition Supersense Role, Preposition Supersense Function, Event Factuality Detection"
] | 135 |
1908.09246 | Open Event Extraction from Online Text using a Generative Adversarial Network | To extract the structured representations of open-domain events, Bayesian graphical models have made some progress. However, these approaches typically assume that all words in a document are generated from a single event. While this may be true for short text such as tweets, such an assumption does not generally hold for long text such as news articles. Moreover, Bayesian graphical models often rely on Gibbs sampling for parameter inference which may take long time to converge. To address these limitations, we propose an event extraction model based on Generative Adversarial Nets, called Adversarial-neural Event Model (AEM). AEM models an event with a Dirichlet prior and uses a generator network to capture the patterns underlying latent events. A discriminator is used to distinguish documents reconstructed from the latent events and the original documents. A byproduct of the discriminator is that the features generated by the learned discriminator network allow the visualization of the extracted events. Our model has been evaluated on two Twitter datasets and a news article dataset. Experimental results show that our model outperforms the baseline approaches on all the datasets, with more significant improvements observed on the news article dataset where an increase of 15\% is observed in F-measure. | {
"paragraphs": [
[
"With the increasing popularity of the Internet, online texts provided by social media platform (e.g. Twitter) and news media sites (e.g. Google news) have become important sources of real-world events. Therefore, it is crucial to automatically extract events from online texts.",
"Due to the high variety of events discussed online and the difficulty in obtaining annotated data for training, traditional template-based or supervised learning approaches for event extraction are no longer applicable in dealing with online texts. Nevertheless, newsworthy events are often discussed by many tweets or online news articles. Therefore, the same event could be mentioned by a high volume of redundant tweets or news articles. This property inspires the research community to devise clustering-based models BIBREF0 , BIBREF1 , BIBREF2 to discover new or previously unidentified events without extracting structured representations.",
"To extract structured representations of events such as who did what, when, where and why, Bayesian approaches have made some progress. Assuming that each document is assigned to a single event, which is modeled as a joint distribution over the named entities, the date and the location of the event, and the event-related keywords, Zhou et al. zhou2014simple proposed an unsupervised Latent Event Model (LEM) for open-domain event extraction. To address the limitation that LEM requires the number of events to be pre-set, Zhou et al. zhou2017event further proposed the Dirichlet Process Event Mixture Model (DPEMM) in which the number of events can be learned automatically from data. However, both LEM and DPEMM have two limitations: (1) they assume that all words in a document are generated from a single event which can be represented by a quadruple INLINEFORM0 entity, location, keyword, date INLINEFORM1 . However, long texts such news articles often describe multiple events which clearly violates this assumption; (2) During the inference process of both approaches, the Gibbs sampler needs to compute the conditional posterior distribution and assigns an event for each document. This is time consuming and takes long time to converge.",
"To deal with these limitations, in this paper, we propose the Adversarial-neural Event Model (AEM) based on adversarial training for open-domain event extraction. The principle idea is to use a generator network to learn the projection function between the document-event distribution and four event related word distributions (entity distribution, location distribution, keyword distribution and date distribution). Instead of providing an analytic approximation, AEM uses a discriminator network to discriminate between the reconstructed documents from latent events and the original input documents. This essentially helps the generator to construct a more realistic document from a random noise drawn from a Dirichlet distribution. Due to the flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions. And the supervision signal provided by the discriminator will help generator to capture the event-related patterns. Furthermore, the discriminator also provides low-dimensional discriminative features which can be used to visualize documents and events.",
"The main contributions of the paper are summarized below:"
],
[
"Our work is related to two lines of research, event extraction and Generative Adversarial Nets."
],
[
"Recently there has been much interest in event extraction from online texts, and approaches could be categorized as domain-specific and open-domain event extraction.",
"Domain-specific event extraction often focuses on the specific types of events (e.g. sports events or city events). Panem et al. panem2014structured devised a novel algorithm to extract attribute-value pairs and mapped them to manually generated schemes for extracting the natural disaster events. Similarly, to extract the city-traffic related event, Anantharam et al. anantharam2015extracting viewed the task as a sequential tagging problem and proposed an approach based on the conditional random fields. Zhang zhang2018event proposed an event extraction approach based on imitation learning, especially on inverse reinforcement learning. ",
"Open-domain event extraction aims to extract events without limiting the specific types of events. To analyze individual messages and induce a canonical value for each event, Benson et al. benson2011event proposed an approach based on a structured graphical model. By representing an event with a binary tuple which is constituted by a named entity and a date, Ritter et al. ritter2012open employed some statistic to measure the strength of associations between a named entity and a date. The proposed system relies on a supervised labeler trained on annotated data. In BIBREF1 , Abdelhaq et al. developed a real-time event extraction system called EvenTweet, and each event is represented as a triple constituted by time, location and keywords. To extract more information, Wang el al. wang2015seeft developed a system employing the links in tweets and combing tweets with linked articles to identify events. Xia el al. xia2015new combined texts with the location information to detect the events with low spatial and temporal deviations. Zhou et al. zhou2014simple,zhou2017event represented event as a quadruple and proposed two Bayesian models to extract events from tweets."
],
[
"As a neural-based generative model, Generative Adversarial Nets BIBREF3 have been extensively researched in natural language processing (NLP) community.",
"For text generation, the sequence generative adversarial network (SeqGAN) proposed in BIBREF4 incorporated a policy gradient strategy to optimize the generation process. Based on the policy gradient, Lin et al. lin2017adversarial proposed RankGAN to capture the rich structures of language by ranking and analyzing a collection of human-written and machine-written sentences. To overcome mode collapse when dealing with discrete data, Fedus et al. fedus2018maskgan proposed MaskGAN which used an actor-critic conditional GAN to fill in missing text conditioned on the surrounding context. Along this line, Wang et al. wang2018sentigan proposed SentiGAN to generate texts of different sentiment labels. Besides, Li et al. li2018learning improved the performance of semi-supervised text classification using adversarial training, BIBREF5 , BIBREF6 designed GAN-based models for distance supervision relation extraction.",
"Although various GAN based approaches have been explored for many applications, none of these approaches tackles open-domain event extraction from online texts. We propose a novel GAN-based event extraction model called AEM. Compared with the previous models, AEM has the following differences: (1) Unlike most GAN-based text generation approaches, a generator network is employed in AEM to learn the projection function between an event distribution and the event-related word distributions (entity, location, keyword, date). The learned generator captures event-related patterns rather than generating text sequence; (2) Different from LEM and DPEMM, AEM uses a generator network to capture the event-related patterns and is able to mine events from different text sources (short and long). Moreover, unlike traditional inference procedure, such as Gibbs sampling used in LEM and DPEMM, AEM could extract the events more efficiently due to the CUDA acceleration; (3) The discriminative features learned by the discriminator of AEM provide a straightforward way to visualize the extracted events."
],
[
"We describe Adversarial-neural Event Model (AEM) in this section. An event is represented as a quadruple < INLINEFORM0 >, where INLINEFORM1 stands for non-location named entities, INLINEFORM2 for a location, INLINEFORM3 for event-related keywords, INLINEFORM4 for a date, and each component in the tuple is represented by component-specific representative words.",
"AEM is constituted by three components: (1) The document representation module, as shown at the top of Figure FIGREF4 , defines a document representation approach which converts an input document from the online text corpus into INLINEFORM0 which captures the key event elements; (2) The generator INLINEFORM1 , as shown in the lower-left part of Figure FIGREF4 , generates a fake document INLINEFORM2 which is constituted by four multinomial distributions using an event distribution INLINEFORM3 drawn from a Dirichlet distribution as input; (3) The discriminator INLINEFORM4 , as shown in the lower-right part of Figure FIGREF4 , distinguishes the real documents from the fake ones and its output is subsequently employed as a learning signal to update the INLINEFORM5 and INLINEFORM6 . The details of each component are presented below."
],
[
"Each document INLINEFORM0 in a given corpus INLINEFORM1 is represented as a concatenation of 4 multinomial distributions which are entity distribution ( INLINEFORM2 ), location distribution ( INLINEFORM3 ), keyword distribution ( INLINEFORM4 ) and date distribution ( INLINEFORM5 ) of the document. As four distributions are calculated in a similar way, we only describe the computation of the entity distribution below as an example.",
"The entity distribution INLINEFORM0 is represented by a normalized INLINEFORM1 -dimensional vector weighted by TF-IDF, and it is calculated as: INLINEFORM2 ",
" where INLINEFORM0 is the pseudo corpus constructed by removing all non-entity words from INLINEFORM1 , INLINEFORM2 is the total number of distinct entities in a corpus, INLINEFORM3 denotes the number of INLINEFORM4 -th entity appeared in document INLINEFORM5 , INLINEFORM6 represents the number of documents in the corpus, and INLINEFORM7 is the number of documents that contain INLINEFORM8 -th entity, and the obtained INLINEFORM9 denotes the relevance between INLINEFORM10 -th entity and document INLINEFORM11 .",
"Similarly, location distribution INLINEFORM0 , keyword distribution INLINEFORM1 and date distribution INLINEFORM2 of INLINEFORM3 could be calculated in the same way, and the dimensions of these distributions are denoted as INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively. Finally, each document INLINEFORM7 in the corpus is represented by a INLINEFORM8 -dimensional ( INLINEFORM9 = INLINEFORM10 + INLINEFORM11 + INLINEFORM12 + INLINEFORM13 ) vector INLINEFORM14 by concatenating four computed distributions."
],
[
"The generator network INLINEFORM0 is designed to learn the projection function between the document-event distribution INLINEFORM1 and the four document-level word distributions (entity distribution, location distribution, keyword distribution and date distribution).",
"More concretely, INLINEFORM0 consists of a INLINEFORM1 -dimensional document-event distribution layer, INLINEFORM2 -dimensional hidden layer and INLINEFORM3 -dimensional event-related word distribution layer. Here, INLINEFORM4 denotes the event number, INLINEFORM5 is the number of units in the hidden layer, INLINEFORM6 is the vocabulary size and equals to INLINEFORM7 + INLINEFORM8 + INLINEFORM9 + INLINEFORM10 . As shown in Figure FIGREF4 , INLINEFORM11 firstly employs a random document-event distribution INLINEFORM12 as an input. To model the multinomial property of the document-event distribution, INLINEFORM13 is drawn from a Dirichlet distribution parameterized with INLINEFORM14 which is formulated as: DISPLAYFORM0 ",
"where INLINEFORM0 is the hyper-parameter of the dirichlet distribution, INLINEFORM1 is the number of events which should be set in AEM, INLINEFORM2 , INLINEFORM3 represents the proportion of event INLINEFORM4 in the document and INLINEFORM5 .",
"Subsequently, INLINEFORM0 transforms INLINEFORM1 into a INLINEFORM2 -dimensional hidden space using a linear layer followed by layer normalization, and the transformation is defined as: DISPLAYFORM0 ",
" where INLINEFORM0 represents the weight matrix of hidden layer, and INLINEFORM1 denotes the bias term, INLINEFORM2 is the parameter of LeakyReLU activation and is set to 0.1, INLINEFORM3 and INLINEFORM4 denote the normalized hidden states and the outputs of the hidden layer, and INLINEFORM5 represents the layer normalization.",
"Then, to project INLINEFORM0 into four document-level event related word distributions ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 shown in Figure FIGREF4 ), four subnets (each contains a linear layer, a batch normalization layer and a softmax layer) are employed in INLINEFORM5 . And the exact transformation is based on the formulas below: DISPLAYFORM0 ",
" where INLINEFORM0 means softmax layer, INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 denote the weight matrices of the linear layers in subnets, INLINEFORM5 , INLINEFORM6 , INLINEFORM7 and INLINEFORM8 represent the corresponding bias terms, INLINEFORM9 , INLINEFORM10 , INLINEFORM11 and INLINEFORM12 are state vectors. INLINEFORM13 , INLINEFORM14 , INLINEFORM15 and INLINEFORM16 denote the generated entity distribution, location distribution, keyword distribution and date distribution, respectively, that correspond to the given event distribution INLINEFORM17 . And each dimension represents the relevance between corresponding entity/location/keyword/date term and the input event distribution.",
"Finally, four generated distributions are concatenated to represent the generated document INLINEFORM0 corresponding to the input INLINEFORM1 : DISPLAYFORM0 ",
"The discriminator network INLINEFORM0 is designed as a fully-connected network which contains an input layer, a discriminative feature layer (discriminative features are employed for event visualization) and an output layer. In AEM, INLINEFORM1 uses fake document INLINEFORM2 and real document INLINEFORM3 as input and outputs the signal INLINEFORM4 to indicate the source of the input data (lower value denotes that INLINEFORM5 is prone to predict the input data as a fake document and vice versa).",
"As have previously been discussed in BIBREF7 , BIBREF8 , lipschitz continuity of INLINEFORM0 network is crucial to the training of the GAN-based approaches. To ensure the lipschitz continuity of INLINEFORM1 , we employ the spectral normalization technique BIBREF9 . More concretely, for each linear layer INLINEFORM2 (bias term is omitted for simplicity) in INLINEFORM3 , the weight matrix INLINEFORM4 is normalized by INLINEFORM5 . Here, INLINEFORM6 is the spectral norm of the weight matrix INLINEFORM7 with the definition below: DISPLAYFORM0 ",
" which is equivalent to the largest singular value of INLINEFORM0 . The weight matrix INLINEFORM1 is then normalized using: DISPLAYFORM0 ",
"Obviously, the normalized weight matrix INLINEFORM0 satisfies that INLINEFORM1 and further ensures the lipschitz continuity of the INLINEFORM2 network BIBREF9 . To reduce the high cost of computing spectral norm INLINEFORM3 using singular value decomposition at each iteration, we follow BIBREF10 and employ the power iteration method to estimate INLINEFORM4 instead. With this substitution, the spectral norm can be estimated with very small additional computational time."
],
[
"The real document INLINEFORM0 and fake document INLINEFORM1 shown in Figure FIGREF4 could be viewed as random samples from two distributions INLINEFORM2 and INLINEFORM3 , and each of them is a joint distribution constituted by four Dirichlet distributions (corresponding to entity distribution, location distribution, keyword distribution and date distribution). The training objective of AEM is to let the distribution INLINEFORM4 (produced by INLINEFORM5 network) to approximate the real data distribution INLINEFORM6 as much as possible.",
"To compare the different GAN losses, Kurach kurach2018gan takes a sober view of the current state of GAN and suggests that the Jansen-Shannon divergence used in BIBREF3 performs more stable than variant objectives. Besides, Kurach also advocates that the gradient penalty (GP) regularization devised in BIBREF8 will further improve the stability of the model. Thus, the objective function of the proposed AEM is defined as: DISPLAYFORM0 ",
" where INLINEFORM0 denotes the discriminator loss, INLINEFORM1 represents the gradient penalty regularization loss, INLINEFORM2 is the gradient penalty coefficient which trade-off the two components of objective, INLINEFORM3 could be obtained by sampling uniformly along a straight line between INLINEFORM4 and INLINEFORM5 , INLINEFORM6 denotes the corresponding distribution.",
"The training procedure of AEM is presented in Algorithm SECREF15 , where INLINEFORM0 is the event number, INLINEFORM1 denotes the number of discriminator iterations per generator iteration, INLINEFORM2 is the batch size, INLINEFORM3 represents the learning rate, INLINEFORM4 and INLINEFORM5 are hyper-parameters of Adam BIBREF11 , INLINEFORM6 denotes INLINEFORM7 . In this paper, we set INLINEFORM8 , INLINEFORM9 , INLINEFORM10 . Moreover, INLINEFORM11 , INLINEFORM12 and INLINEFORM13 are set as 0.0002, 0.5 and 0.999.",
"[!h] Training procedure for AEM [1] INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 the trained INLINEFORM7 and INLINEFORM8 . Initial INLINEFORM9 parameters INLINEFORM10 and INLINEFORM11 parameter INLINEFORM12 INLINEFORM13 has not converged INLINEFORM14 INLINEFORM15 Sample INLINEFORM16 , Sample a random INLINEFORM17 Sample a random number INLINEFORM18 INLINEFORM19 INLINEFORM20 INLINEFORM21 INLINEFORM22 INLINEFORM23 INLINEFORM24 Sample INLINEFORM25 noise INLINEFORM26 INLINEFORM27 "
],
[
"After the model training, the generator INLINEFORM0 learns the mapping function between the document-event distribution and the document-level event-related word distributions (entity, location, keyword and date). In other words, with an event distribution INLINEFORM1 as input, INLINEFORM2 could generate the corresponding entity distribution, location distribution, keyword distribution and date distribution.",
"In AEM, we employ event seed INLINEFORM0 , an INLINEFORM1 -dimensional vector with one-hot encoding, to generate the event related word distributions. For example, in ten event setting, INLINEFORM2 represents the event seed of the first event. With the event seed INLINEFORM3 as input, the corresponding distributions could be generated by INLINEFORM4 based on the equation below: DISPLAYFORM0 ",
" where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 denote the entity distribution, location distribution, keyword distribution and date distribution of the first event respectively."
],
[
"In this section, we firstly describe the datasets and baseline approaches used in our experiments and then present the experimental results."
],
[
"To validate the effectiveness of AEM for extracting events from social media (e.g. Twitter) and news media sites (e.g. Google news), three datasets (FSD BIBREF12 , Twitter, and Google datasets) are employed. Details are summarized below:",
"",
"FSD dataset (social media) is the first story detection dataset containing 2,499 tweets. We filter out events mentioned in less than 15 tweets since events mentioned in very few tweets are less likely to be significant. The final dataset contains 2,453 tweets annotated with 20 events.",
"",
"Twitter dataset (social media) is collected from tweets published in the month of December in 2010 using Twitter streaming API. It contains 1,000 tweets annotated with 20 events.",
"",
"Google dataset (news article) is a subset of GDELT Event Database INLINEFORM0 , documents are retrieved by event related words. For example, documents which contain `malaysia', `airline', `search' and `plane' are retrieved for event MH370. By combining 30 events related documents, the dataset contains 11,909 news articles.",
"",
"We choose the following three models as the baselines:",
"",
"K-means is a well known data clustering algorithm, we implement the algorithm using sklearn toolbox, and represent documents using bag-of-words weighted by TF-IDF.",
"",
"LEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements. We implement the algorithm with the default configuration.",
"",
"DPEMM BIBREF14 is a non-parametric mixture model for event extraction. It addresses the limitation of LEM that the number of events should be known beforehand. We implement the model with the default configuration.",
"For social media text corpus (FSD and Twitter), a named entity tagger specifically built for Twitter is used to extract named entities including locations from tweets. A Twitter Part-of-Speech (POS) tagger BIBREF15 is used for POS tagging and only words tagged with nouns, verbs and adjectives are retained as keywords. For the Google dataset, we use the Stanford Named Entity Recognizer to identify the named entities (organization, location and person). Due to the `date' information not being provided in the Google dataset, we further divide the non-location named entities into two categories (`person' and `organization') and employ a quadruple <organization, location, person, keyword> to denote an event in news articles. We also remove common stopwords and only keep the recognized named entities and the tokens which are verbs, nouns or adjectives."
],
[
"To evaluate the performance of the proposed approach, we use the evaluation metrics such as precision, recall and F-measure. Precision is defined as the proportion of the correctly identified events out of the model generated events. Recall is defined as the proportion of correctly identified true events. For calculating the precision of the 4-tuple, we use following criteria:",
"(1) Do the entity/organization, location, date/person and keyword that we have extracted refer to the same event?",
"(2) If the extracted representation contains keywords, are they informative enough to tell us what happened?",
"Table TABREF35 shows the event extraction results on the three datasets. The statistics are obtained with the default parameter setting that INLINEFORM0 is set to 5, number of hidden units INLINEFORM1 is set to 200, and INLINEFORM2 contains three fully-connected layers. The event number INLINEFORM3 for three datasets are set to 25, 25 and 35, respectively. The examples of extracted events are shown in Table. TABREF36 .",
"It can be observed that K-means performs the worst over all three datasets. On the social media datasets, AEM outpoerforms both LEM and DPEMM by 6.5% and 1.7% respectively in F-measure on the FSD dataset, and 4.4% and 3.7% in F-measure on the Twitter dataset. We can also observe that apart from K-means, all the approaches perform worse on the Twitter dataset compared to FSD, possibly due to the limited size of the Twitter dataset. Moreover, on the Google dataset, the proposed AEM performs significantly better than LEM and DPEMM. It improves upon LEM by 15.5% and upon DPEMM by more than 30% in F-measure. This is because: (1) the assumption made by LEM and DPEMM that all words in a document are generated from a single event is not suitable for long text such as news articles; (2) DPEMM generates too many irrelevant events which leads to a very low precision score. Overall, we see the superior performance of AEM across all datasets, with more significant improvement on the for Google datasets (long text).",
"We next visualize the detected events based on the discriminative features learned by the trained INLINEFORM0 network in AEM. The t-SNE BIBREF16 visualization results on the datasets are shown in Figure FIGREF19 . For clarity, each subplot is plotted on a subset of the dataset containing ten randomly selected events. It can be observed that documents describing the same event have been grouped into the same cluster.",
"To further evaluate if a variation of the parameters INLINEFORM0 (the number of discriminator iterations per generator iteration), INLINEFORM1 (the number of units in hidden layer) and the structure of generator INLINEFORM2 will impact the extraction performance, additional experiments have been conducted on the Google dataset, with INLINEFORM3 set to 5, 7 and 10, INLINEFORM4 set to 100, 150 and 200, and three INLINEFORM5 structures (3, 4 and 5 layers). The comparison results on precision, recall and F-measure are shown in Figure FIGREF20 . From the results, it could be observed that AEM with the 5-layer generator performs the best and achieves 96.7% in F-measure, and the worst F-measure obtained by AEM is 85.7%. Overall, the AEM outperforms all compared approaches acorss various parameter settings, showing relatively stable performance.",
"Finally, we compare in Figure FIGREF37 the training time required for each model, excluding the constant time required by each model to load the data. We could observe that K-means runs fastest among all four approaches. Both LEM and DPEMM need to sample the event allocation for each document and update the relevant counts during Gibbs sampling which are time consuming. AEM only requires a fraction of the training time compared to LEM and DPEMM. Moreover, on a larger dataset such as the Google dataset, AEM appears to be far more efficient compared to LEM and DPEMM."
],
[
"In this paper, we have proposed a novel approach based on adversarial training to extract the structured representation of events from online text. The experimental comparison with the state-of-the-art methods shows that AEM achieves improved extraction performance, especially on long text corpora with an improvement of 15% observed in F-measure. AEM only requires a fraction of training time compared to existing Bayesian graphical modeling approaches. In future work, we will explore incorporating external knowledge (e.g. word relatedness contained in word embeddings) into the learning framework for event extraction. Besides, exploring nonparametric neural event extraction approaches and detecting the evolution of events over time from news articles are other promising future directions."
],
[
"We would like to thank anonymous reviewers for their valuable comments and helpful suggestions. This work was funded by the National Key Research and Development Program of China (2016YFC1306704), the National Natural Science Foundation of China (61772132), the Natural Science Foundation of Jiangsu Province of China (BK20161430)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Event Extraction",
"Generative Adversarial Nets",
"Methodology",
"Document Representation",
"Network Architecture",
"Objective and Training Procedure",
"Event Generation",
"Experiments",
"Experimental Setup",
"Experimental Results",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"3f90db9954e19a7841087563d8e0d715f18a1d47",
"efbcb595cdc9fbf7e483706fd34d1600db9f5c25"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ada3973fe510cba83e2934306aaefdbda400999c",
"fad11b1258fc0a782b1cdbea1f6a49bfe547d13d"
],
"answer": [
{
"evidence": [
"We choose the following three models as the baselines:",
"K-means is a well known data clustering algorithm, we implement the algorithm using sklearn toolbox, and represent documents using bag-of-words weighted by TF-IDF.",
"LEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements. We implement the algorithm with the default configuration.",
"DPEMM BIBREF14 is a non-parametric mixture model for event extraction. It addresses the limitation of LEM that the number of events should be known beforehand. We implement the model with the default configuration."
],
"extractive_spans": [
"K-means",
"LEM BIBREF13",
"DPEMM BIBREF14"
],
"free_form_answer": "",
"highlighted_evidence": [
"We choose the following three models as the baselines:\n\nK-means is a well known data clustering algorithm, we implement the algorithm using sklearn toolbox, and represent documents using bag-of-words weighted by TF-IDF.\n\nLEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements. We implement the algorithm with the default configuration.\n\nDPEMM BIBREF14 is a non-parametric mixture model for event extraction. It addresses the limitation of LEM that the number of events should be known beforehand. We implement the model with the default configuration."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We choose the following three models as the baselines:",
"K-means is a well known data clustering algorithm, we implement the algorithm using sklearn toolbox, and represent documents using bag-of-words weighted by TF-IDF.",
"LEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements. We implement the algorithm with the default configuration.",
"DPEMM BIBREF14 is a non-parametric mixture model for event extraction. It addresses the limitation of LEM that the number of events should be known beforehand. We implement the model with the default configuration.",
"It can be observed that K-means performs the worst over all three datasets. On the social media datasets, AEM outpoerforms both LEM and DPEMM by 6.5% and 1.7% respectively in F-measure on the FSD dataset, and 4.4% and 3.7% in F-measure on the Twitter dataset. We can also observe that apart from K-means, all the approaches perform worse on the Twitter dataset compared to FSD, possibly due to the limited size of the Twitter dataset. Moreover, on the Google dataset, the proposed AEM performs significantly better than LEM and DPEMM. It improves upon LEM by 15.5% and upon DPEMM by more than 30% in F-measure. This is because: (1) the assumption made by LEM and DPEMM that all words in a document are generated from a single event is not suitable for long text such as news articles; (2) DPEMM generates too many irrelevant events which leads to a very low precision score. Overall, we see the superior performance of AEM across all datasets, with more significant improvement on the for Google datasets (long text)."
],
"extractive_spans": [
"K-means",
"LEM",
"DPEMM"
],
"free_form_answer": "",
"highlighted_evidence": [
"We choose the following three models as the baselines:\n\nK-means is a well known data clustering algorithm, we implement the algorithm using sklearn toolbox, and represent documents using bag-of-words weighted by TF-IDF.",
"LEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. ",
"DPEMM BIBREF14 is a non-parametric mixture model for event extraction. ",
"It can be observed that K-means performs the worst over all three datasets. On the social media datasets, AEM outpoerforms both LEM and DPEMM by 6.5% and 1.7% respectively in F-measure on the FSD dataset, and 4.4% and 3.7% in F-measure on the Twitter dataset. ",
"Moreover, on the Google dataset, the proposed AEM performs significantly better than LEM and DPEMM."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"5551154b894d01e571ea2b54ba22182a9f705c4b",
"bd23023f6189993b3e9fe3b8c6729198d3b79c51"
],
"answer": [
{
"evidence": [
"To validate the effectiveness of AEM for extracting events from social media (e.g. Twitter) and news media sites (e.g. Google news), three datasets (FSD BIBREF12 , Twitter, and Google datasets) are employed. Details are summarized below:"
],
"extractive_spans": [
"FSD BIBREF12 , Twitter, and Google datasets"
],
"free_form_answer": "",
"highlighted_evidence": [
"To validate the effectiveness of AEM for extracting events from social media (e.g. Twitter) and news media sites (e.g. Google news), three datasets (FSD BIBREF12 , Twitter, and Google datasets) are employed."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To validate the effectiveness of AEM for extracting events from social media (e.g. Twitter) and news media sites (e.g. Google news), three datasets (FSD BIBREF12 , Twitter, and Google datasets) are employed. Details are summarized below:"
],
"extractive_spans": [
"FSD dataset",
"Twitter dataset",
"Google dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"To validate the effectiveness of AEM for extracting events from social media (e.g. Twitter) and news media sites (e.g. Google news), three datasets (FSD BIBREF12 , Twitter, and Google datasets) are employed. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"9e6a718458e5d5bce1ff6213aa4f2402e41a23a9"
],
"answer": [
{
"evidence": [
"Although various GAN based approaches have been explored for many applications, none of these approaches tackles open-domain event extraction from online texts. We propose a novel GAN-based event extraction model called AEM. Compared with the previous models, AEM has the following differences: (1) Unlike most GAN-based text generation approaches, a generator network is employed in AEM to learn the projection function between an event distribution and the event-related word distributions (entity, location, keyword, date). The learned generator captures event-related patterns rather than generating text sequence; (2) Different from LEM and DPEMM, AEM uses a generator network to capture the event-related patterns and is able to mine events from different text sources (short and long). Moreover, unlike traditional inference procedure, such as Gibbs sampling used in LEM and DPEMM, AEM could extract the events more efficiently due to the CUDA acceleration; (3) The discriminative features learned by the discriminator of AEM provide a straightforward way to visualize the extracted events."
],
"extractive_spans": [
"generator network to capture the event-related patterns"
],
"free_form_answer": "",
"highlighted_evidence": [
"(2) Different from LEM and DPEMM, AEM uses a generator network to capture the event-related patterns and is able to mine events from different text sources (short and long). Moreover, unlike traditional inference procedure, such as Gibbs sampling used in LEM and DPEMM, AEM could extract the events more efficiently due to the CUDA acceleration;"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"307ceac0671a72a26b7200dce918d4db54da9ebd",
"813139ec619d79bf140ca979ebef362ae80da07f"
],
"answer": [
{
"evidence": [
"To deal with these limitations, in this paper, we propose the Adversarial-neural Event Model (AEM) based on adversarial training for open-domain event extraction. The principle idea is to use a generator network to learn the projection function between the document-event distribution and four event related word distributions (entity distribution, location distribution, keyword distribution and date distribution). Instead of providing an analytic approximation, AEM uses a discriminator network to discriminate between the reconstructed documents from latent events and the original input documents. This essentially helps the generator to construct a more realistic document from a random noise drawn from a Dirichlet distribution. Due to the flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions. And the supervision signal provided by the discriminator will help generator to capture the event-related patterns. Furthermore, the discriminator also provides low-dimensional discriminative features which can be used to visualize documents and events."
],
"extractive_spans": [
"flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions",
"supervision signal provided by the discriminator will help generator to capture the event-related patterns"
],
"free_form_answer": "",
"highlighted_evidence": [
"Instead of providing an analytic approximation, AEM uses a discriminator network to discriminate between the reconstructed documents from latent events and the original input documents. This essentially helps the generator to construct a more realistic document from a random noise drawn from a Dirichlet distribution. Due to the flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions. And the supervision signal provided by the discriminator will help generator to capture the event-related patterns."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To extract structured representations of events such as who did what, when, where and why, Bayesian approaches have made some progress. Assuming that each document is assigned to a single event, which is modeled as a joint distribution over the named entities, the date and the location of the event, and the event-related keywords, Zhou et al. zhou2014simple proposed an unsupervised Latent Event Model (LEM) for open-domain event extraction. To address the limitation that LEM requires the number of events to be pre-set, Zhou et al. zhou2017event further proposed the Dirichlet Process Event Mixture Model (DPEMM) in which the number of events can be learned automatically from data. However, both LEM and DPEMM have two limitations: (1) they assume that all words in a document are generated from a single event which can be represented by a quadruple INLINEFORM0 entity, location, keyword, date INLINEFORM1 . However, long texts such news articles often describe multiple events which clearly violates this assumption; (2) During the inference process of both approaches, the Gibbs sampler needs to compute the conditional posterior distribution and assigns an event for each document. This is time consuming and takes long time to converge.",
"To deal with these limitations, in this paper, we propose the Adversarial-neural Event Model (AEM) based on adversarial training for open-domain event extraction. The principle idea is to use a generator network to learn the projection function between the document-event distribution and four event related word distributions (entity distribution, location distribution, keyword distribution and date distribution). Instead of providing an analytic approximation, AEM uses a discriminator network to discriminate between the reconstructed documents from latent events and the original input documents. This essentially helps the generator to construct a more realistic document from a random noise drawn from a Dirichlet distribution. Due to the flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions. And the supervision signal provided by the discriminator will help generator to capture the event-related patterns. Furthermore, the discriminator also provides low-dimensional discriminative features which can be used to visualize documents and events."
],
"extractive_spans": [],
"free_form_answer": "by learning a projection function between the document-event distribution and four event related word distributions ",
"highlighted_evidence": [
"However, both LEM and DPEMM have two limitations: (1) they assume that all words in a document are generated from a single event which can be represented by a quadruple INLINEFORM0 entity, location, keyword, date INLINEFORM1 .",
"To deal with these limitations, in this paper, we propose the Adversarial-neural Event Model (AEM) based on adversarial training for open-domain event extraction. The principle idea is to use a generator network to learn the projection function between the document-event distribution and four event related word distributions (entity distribution, location distribution, keyword distribution and date distribution). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they report results only on English data?",
"What baseline approaches does this approach out-perform?",
"What datasets are used?",
"What alternative to Gibbs sampling is used?",
"How does this model overcome the assumption that all words in a document are generated from a single event?"
],
"question_id": [
"f3c204723da53c7c8ef4dc1018ffbee545e81056",
"0602a974a879e6eae223cdf048410b5a0111665e",
"56b034c303983b2e276ed6518d6b080f7b8abe6a",
"15e481e668114e4afe0c78eefb716ffe1646b494",
"3d7a982c718ea6bc7e770d8c5da564fbb9d11951"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The framework of the Adverarial-neural Event Model (AEM), and ~def , ~d l f , ~d k f and ~d d f denote the generated entity distribution, location distribution, keyword distribution and date distribution corresponding to event distribution ~θ.",
"Figure 2: Visualization of the ten randomly selected events on each dataset. Each point denotes a document. Different color denotes different events.",
"Figure 3: Comparison of methods and parameter settings,‘n’ and ‘h’ denote parameter nd and H , other parameters follow the default setting. The vertical axis represents methods/parameter settings, the horizontal axis denotes the corresponding performance value. All blue histograms with different intensity are those obtained by AEM.",
"Table 1: Comparison of the performance of event extraction on the three datasets.",
"Figure 4: Comparison of training time of models.",
"Table 2: The event examples extracted by AEM."
],
"file": [
"3-Figure1-1.png",
"7-Figure2-1.png",
"7-Figure3-1.png",
"8-Table1-1.png",
"8-Figure4-1.png",
"9-Table2-1.png"
]
} | [
"How does this model overcome the assumption that all words in a document are generated from a single event?"
] | [
[
"1908.09246-Introduction-2",
"1908.09246-Introduction-3"
]
] | [
"by learning a projection function between the document-event distribution and four event related word distributions "
] | 136 |
1612.08205 | Predicting the Industry of Users on Social Media | Automatic profiling of social media users is an important task for supporting a multitude of downstream applications. While a number of studies have used social media content to extract and study collective social attributes, there is a lack of substantial research that addresses the detection of a user's industry. We frame this task as classification using both feature engineering and ensemble learning. Our industry-detection system uses both posted content and profile information to detect a user's industry with 64.3% accuracy, significantly outperforming the majority baseline in a taxonomy of fourteen industry classes. Our qualitative analysis suggests that a person's industry not only affects the words used and their perceived meanings, but also the number and type of emotions being expressed. | {
"paragraphs": [
[
"Over the past two decades, the emergence of social media has enabled the proliferation of traceable human behavior. The content posted by users can reflect who their friends are, what topics they are interested in, or which company they are working for. At the same time, users are listing a number of profile fields to define themselves to others. The utilization of such metadata has proven important in facilitating further developments of applications in advertising BIBREF0 , personalization BIBREF1 , and recommender systems BIBREF2 . However, profile information can be limited, depending on the platform, or it is often deliberately omitted BIBREF3 . To uncloak this information, a number of studies have utilized social media users' footprints to approximate their profiles.",
"This paper explores the potential of predicting a user's industry –the aggregate of enterprises in a particular field– by identifying industry indicative text in social media. The accurate prediction of users' industry can have a big impact on targeted advertising by minimizing wasted advertising BIBREF4 and improved personalized user experience. A number of studies in the social sciences have associated language use with social factors such as occupation, social class, education, and income BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . An additional goal of this paper is to examine such findings, and in particular the link between language and occupational class, through a data-driven approach.",
"In addition, we explore how meaning changes depending on the occupational context. By leveraging word embeddings, we seek to quantify how, for example, cloud might mean a separate concept (e.g., condensed water vapor) in the text written by users that work in environmental jobs while it might be used differently by users in technology occupations (e.g., Internet-based computing).",
"Specifically, this paper makes four main contributions. First, we build a large, industry-annotated dataset that contains over 20,000 blog users. In addition to their posted text, we also link a number of user metadata including their gender, location, occupation, introduction and interests.",
"Second, we build content-based classifiers for the industry prediction task and study the effect of incorporating textual features from the users' profile metadata using various meta-classification techniques, significantly improving both the overall accuracy and the average per industry accuracy.",
"Next, after examining which words are indicative for each industry, we build vector-space representations of word meanings and calculate one deviation for each industry, illustrating how meaning is differentiated based on the users' industries. We qualitatively examine the resulting industry-informed semantic representations of words by listing the words per industry that are most similar to job related and general interest terms.",
"Finally, we rank the different industries based on the normalized relative frequencies of emotionally charged words (positive and negative) and, in addition, discover that, for both genders, these frequencies do not statistically significantly correlate with an industry's gender dominance ratio.",
"After discussing related work in Section SECREF2 , we present the dataset used in this study in Section SECREF3 . In Section SECREF4 we evaluate two feature selection methods and examine the industry inference problem using the text of the users' postings. We then augment our content-based classifier by building an ensemble that incorporates several metadata classifiers. We list the most industry indicative words and expose how each industrial semantic field varies with respect to a variety of terms in Section SECREF5 . We explore how the frequencies of emotionally charged words in each gender correlate with the industries and their respective gender dominance ratio and, finally, conclude in Section SECREF6 ."
],
[
"Alongside the wide adoption of social media by the public, researchers have been leveraging the newly available data to create and refine models of users' behavior and profiling. There exists a myriad research that analyzes language in order to profile social media users. Some studies sought to characterize users' personality BIBREF9 , BIBREF10 , while others sequenced the expressed emotions BIBREF11 , studied mental disorders BIBREF12 , and the progression of health conditions BIBREF13 . At the same time, a number of researchers sought to predict the social media users' age and/or gender BIBREF14 , BIBREF15 , BIBREF16 , while others targeted and analyzed the ethnicity, nationality, and race of the users BIBREF17 , BIBREF18 , BIBREF19 . One of the profile fields that has drawn a great deal of attention is the location of a user. Among others, Hecht et al. Hecht11 predicted Twitter users' locations using machine learning on nationwide and state levels. Later, Han et al. Han14 identified location indicative words to predict the location of Twitter users down to the city level.",
"As a separate line of research, a number of studies have focused on discovering the political orientation of users BIBREF15 , BIBREF20 , BIBREF21 . Finally, Li et al. Li14a proposed a way to model major life events such as getting married, moving to a new place, or graduating. In a subsequent study, BIBREF22 described a weakly supervised information extraction method that was used in conjunction with social network information to identify the name of a user's spouse, the college they attended, and the company where they are employed.",
"The line of work that is most closely related to our research is the one concerned with understanding the relation between people's language and their industry. Previous research from the fields of psychology and economics have explored the potential for predicting one's occupation from their ability to use math and verbal symbols BIBREF23 and the relationship between job-types and demographics BIBREF24 . More recently, Huang et al. Huang15 used machine learning to classify Sina Weibo users to twelve different platform-defined occupational classes highlighting the effect of homophily in user interactions. This work examined only users that have been verified by the Sina Weibo platform, introducing a potential bias in the resulting dataset. Finally, Preotiuc-Pietro et al. Preoctiuc15 predicted the occupational class of Twitter users using the Standard Occupational Classification (SOC) system, which groups the different jobs based on skill requirements. In that work, the data collection process was limited to only users that specifically mentioned their occupation in their self-description in a way that could be directly mapped to a SOC occupational class. The mapping between a substring of their self-description and a SOC occupational class was done manually. Because of the manual annotation step, their method was not scalable; moreover, because they identified the occupation class inside a user self-description, only a very small fraction of the Twitter users could be included (in their case, 5,191 users).",
"Both of these recent studies are based on micro-blogging platforms, which inherently restrict the number of characters that a post can have, and consequently the way that users can express themselves.",
"Moreover, both studies used off-the-shelf occupational taxonomies (rather than self-declared occupation categories), resulting in classes that are either too generic (e.g., media, welfare and electronic are three of the twelve Sina Weibo categories), or too intermixed (e.g., an assistant accountant is in a different class from an accountant in SOC). To address these limitations, we investigate the industry prediction task in a large blog corpus consisting of over 20K American users, 40K web-blogs, and 560K blog posts."
],
[
"We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. on the profile finder on http://www.blogger.com, and scraping only those users that had the industry profile element completed.",
"For each of these bloggers, we retrieve all their blogs, and for each of these blogs we download the 21 most recent blog postings. We then clean these blog posts of HTML tags and tokenize them, and drop those bloggers whose cumulative textual content in their posts is less than 600 characters. Following these guidelines, we identified all the U.S. bloggers with completed industry information.",
"Traditionally, standardized industry taxonomies organize economic activities into groups based on similar production processes, products or services, delivery systems or behavior in financial markets. Following such assumptions and regardless of their many similarities, a tomato farmer would be categorized into a distinct industry from a tobacco farmer. As demonstrated in Preotiuc-Pietro et al. Preoctiuc15 such groupings can cause unwarranted misclassifications.",
"The Blogger platform provides a total of 39 different industry options. Even though a completed industry value is an implicit text annotation, we acknowledge the same problem noted in previous studies: some categories are too broad, while others are very similar. To remedy this and following Guibert et al. Guibert71, who argued that the denominations used in a classification must reflect the purpose of the study, we group the different Blogger industries based on similar educational background and similar technical terminology. To do that, we exclude very general categories and merge conceptually similar ones. Examples of broad categories are the Education and the Student options: a teacher could be teaching in any concentration, while a student could be enrolled in any discipline. Examples of conceptually similar categories are the Investment Banking and the Banking options.",
"The final set of categories is shown in Table TABREF1 , along with the number of users in each category. The resulting dataset consists of 22,880 users, 41,094 blogs, and 561,003 posts. Table TABREF2 presents additional statistics of our dataset."
],
[
"After collecting our dataset, we split it into three sets: a train set, a development set, and a test set. The sizes of these sets are 17,880, 2,500, and 2,500 users, respectively, with users randomly assigned to these sets. In all the experiments that follow, we evaluate our classifiers by training them on the train set, configure the parameters and measure performance on the development set, and finally report the prediction accuracy and results on the test set. Note that all the experiments are performed at user level, i.e., all the data for one user is compiled into one instance in our data sets.",
"To measure the performance of our classifiers, we use the prediction accuracy. However, as shown in Table TABREF1 , the available data is skewed across categories, which could lead to somewhat distorted accuracy numbers depending on how well a model learns to predict the most populous classes. Moreover, accuracy alone does not provide a great deal of insight into the individual performance per industry, which is one of the main objectives in this study. Therefore, in our results below, we report: (1) micro-accuracy ( INLINEFORM0 ), calculated as the percentage of correctly classified instances out of all the instances in the development (test) data; and (2) macro-accuracy ( INLINEFORM1 ), calculated as the average of the per-category accuracies, where the per-category accuracy is the percentage of correctly classified instances out of the instances belonging to one category in the development (test) data."
],
[
"In this section, we seek the effectiveness of using solely textual features obtained from the users' postings to predict their industry.",
"The industry prediction baseline Majority is set by discovering the most frequently featured class in our training set and picking that class in all predictions in the respective development or testing set.",
"After excluding all the words that are not used by at least three separate users in our training set, we build our AllWords model by counting the frequencies of all the remaining words and training a multinomial Naive Bayes classifier. As seen in Figure FIGREF3 , we can far exceed the Majority baseline performance by incorporating basic language signals into machine learning algorithms (173% INLINEFORM0 improvement).",
"We additionally explore the potential of improving our text classification task by applying a number of feature ranking methods and selecting varying proportions of top ranked features in an attempt to exclude noisy features. We start by ranking the different features, w, according to their Information Gain Ratio score (IGR) with respect to every industry, i, and training our classifier using different proportions of the top features. INLINEFORM0 INLINEFORM1 ",
"Even though we find that using the top 95% of all the features already exceeds the performance of the All Words model on the development data, we further experiment with ranking our features with a more aggressive formula that heavily promotes the features that are tightly associated with any industry category. Therefore, for every word in our training set, we define our newly introduced ranking method, the Aggressive Feature Ranking (AFR), as: INLINEFORM0 ",
"In Figure FIGREF3 we illustrate the performance of all four methods in our industry prediction task on the development data. Note that for each method, we provide both the accuracy ( INLINEFORM0 ) and the average per-class accuracy ( INLINEFORM1 ). The Majority and All Words methods apply to all the features; therefore, they are represented as a straight line in the figure. The IGR and AFR methods are applied to varying subsets of the features using a 5% step.",
"Our experiments demonstrate that the word choice that the users make in their posts correlates with their industry. The first observation in Figure FIGREF3 is that the INLINEFORM0 is proportional to INLINEFORM1 ; as INLINEFORM2 increases, so does INLINEFORM3 . Secondly, the best result on the development set is achieved by using the top 90% of the features using the AFR method. Lastly, the improvements of the IGR and AFR feature selections are not substantially better in comparison to All Words (at most 5% improvement between All Words and AFR), which suggest that only a few noisy features exist and most of the words play some role in shaping the “language\" of an industry.",
"As a final evaluation, we apply on the test data the classifier found to work best on the development data (AFR feature selection, top 90% features), for an INLINEFORM0 of 0.534 and INLINEFORM1 of 0.477."
],
[
"Together with the industry information and the most recent postings of each blogger, we also download a number of accompanying profile elements. Using these additional elements, we explore the potential of incorporating users' metadata in our classifiers.",
"Table TABREF7 shows the different user metadata we consider together with their coverage percentage (not all users provide a value for all of the profile elements). With the exception of the gender field, the remaining metadata elements shown in Table TABREF7 are completed by the users as a freely editable text field. This introduces a considerable amount of noise in the set of possible metadata values. Examples of noise in the occupation field include values such as “Retired”, “I work.”, or “momma” which are not necessarily informative for our industry prediction task.",
"To examine whether the metadata fields can help in the prediction of a user's industry, we build classifiers using the different metadata elements. For each metadata element that has a textual value, we use all the words in the training set for that field as features. The only two exceptions are the state field, which is encoded as one feature that can take one out of 50 different values representing the 50 U.S. states; and the gender field, which is encoded as a feature with a distinct value for each user gender option: undefined, male, or female.",
"As shown in Table TABREF9 , we build four different classifiers using the multinomial NB algorithm: Occu (which uses the words found in the occupation profile element), Intro (introduction), Inter (interests), and Gloc (combined gender, city, state).",
"In general, all the metadata classifiers perform better than our majority baseline ( INLINEFORM0 of 18.88%). For the Gloc classifier, this result is in alignment with previous studies BIBREF24 . However, the only metadata classifier that outperforms the content classifier is the Occu classifier, which despite missing and noisy occupation values exceeds the content classifier's performance by an absolute 3.2%.",
"To investigate the promise of combining the five different classifiers we have built so far, we calculate their inter-prediction agreement using Fleiss's Kappa BIBREF25 , as well as the lower prediction bounds using the double fault measure BIBREF26 . The Kappa values, presented in the lower left side of Table TABREF10 , express the classification agreement for categorical items, in this case the users' industry. Lower values, especially values below 30%, mean smaller agreement. Since all five classifiers have better-than-baseline accuracy, this low agreement suggests that their predictions could potentially be combined to achieve a better accumulated result.",
"Moreover, the double fault measure values, which are presented in the top-right hand side of Table TABREF10 , express the proportion of test cases for which both of the two respective classifiers make false predictions, essentially providing the lowest error bound for the pairwise ensemble classifier performance. The lower those numbers are, the greater the accuracy potential of any meta-classification scheme that combines those classifiers. Once again, the low double fault measure values suggest potential gain from a combination of the base classifiers into an ensemble of models.",
"After establishing the promise of creating an ensemble of classifiers, we implement two meta-classification approaches. First, we combine our classifiers using features concatenation (or early fusion). Starting with our content-based classifier (Text), we successively add the features derived from each metadata element. The results, both micro- and macro-accuracy, are presented in Table TABREF12 . Even though all these four feature concatenation ensembles outperform the content-based classifier in the development set, they fail to outperform the Occu classifier.",
"Second, we explore the potential of using stacked generalization (or late fusion) BIBREF27 . The base classifiers, referred to as L0 classifiers, are trained on different folds of the training set and used to predict the class of the remaining instances. Those predictions are then used together with the true label of the training instances to train a second classifier, referred to as the L1 classifier: this L1 is used to produce the final prediction on both the development data and the test data. Traditionally, stacking uses different machine learning algorithms on the same training data. However in our case, we use the same algorithm (multinomial NB) on heterogeneous data (i.e., different types of data such as content, occupation, introduction, interests, gender, city and state) in order to exploit all available sources of information.",
"The ensemble learning results on the development set are shown in Table TABREF12 . We notice a constant improvement for both metrics when adding more classifiers to our ensemble except for the Gloc classifier, which slightly reduces the performance. The best result is achieved using an ensemble of the Text, Occu, Intro, and Inter L0 classifiers; the respective performance on the test set is an INLINEFORM0 of 0.643 and an INLINEFORM1 of 0.564. Finally, we present in Figure FIGREF11 the prediction accuracy for the final classifier for each of the different industries in our test dataset. Evidently, some industries are easier to predict than others. For example, while the Real Estate and Religion industries achieve accuracy figures above 80%, other industries, such as the Banking industry, are predicted correctly in less than 17% of the time. Anecdotal evidence drawn from the examination of the confusion matrix does not encourage any strong association of the Banking class with any other. The misclassifications are roughly uniform across all other classes, suggesting that the users in the Banking industry use language in a non-distinguishing way."
],
[
"In this section, we provide a qualitative analysis of the language of the different industries."
],
[
"To conduct a qualitative exploration of which words indicate the industry of a user, Table TABREF14 shows the three top-ranking content words for the different industries using the AFR method.",
"Not surprisingly, the top ranked words align well with what we would intuitively expect for each industry. Even though most of these words are potentially used by many users regardless of their industry in our dataset, they are still distinguished by the AFR method because of the different frequencies of these words in the text of each industry.",
""
],
[
"",
"Next, we examine how the meaning of a word is shaped by the context in which it is uttered. In particular, we qualitatively investigate how the speakers' industry affects meaning by learning vector-space representations of words that take into account such contextual information. To achieve this, we apply the contextualized word embeddings proposed by Bamman et al. Bamman14, which are based on an extension of the “skip-gram\" language model BIBREF28 .",
"In addition to learning a global representation for each word, these contextualized embeddings compute one deviation from the common word embedding representation for each contextual variable, in this case, an industry option. These deviations capture the terms' meaning variations (shifts in the INLINEFORM0 -dimensional space of the representations, where INLINEFORM1 in our experiments) in the text of the different industries, however all the embeddings are in the same vector space to allow for comparisons to one another.",
"Using the word representations learned for each industry, we present in Table TABREF16 the terms in the Technology and the Tourism industries that have the highest cosine similarity with a job-related word, customers. Similarly, Table TABREF17 shows the words in the Environment and the Tourism industries that are closest in meaning to a general interest word, food. More examples are given in the Appendix SECREF8 .",
"The terms that rank highest in each industry are noticeably different. For example, as seen in Table TABREF17 , while food in the Environment industry is similar to nutritionally and locally, in the Tourism industry the same word relates more to terms such as delicious and pastries. These results not only emphasize the existing differences in how people in different industries perceive certain terms, but they also demonstrate that those differences can effectively be captured in the resulting word embeddings.",
""
],
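The nearest-neighbour queries behind Tables TABREF16 and TABREF17 boil down to ranking words by cosine similarity within one industry-specific embedding space. The sketch below is illustrative only: the nested dictionary `embeddings[industry][word]` is a hypothetical data structure, not part of the original implementation, assumed to hold the per-industry vectors described above.

```python
# Illustrative sketch only: `embeddings` is a hypothetical dict of dicts that
# maps an industry name to {word: numpy vector}, all in one shared vector space.
import numpy as np

def cosine(u, v):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def nearest_terms(word, industry, embeddings, k=3):
    """Return the k terms closest in meaning to `word` within one industry."""
    query = embeddings[industry][word]
    scored = [(other, cosine(query, vec))
              for other, vec in embeddings[industry].items() if other != word]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# Example query mirroring Table TABREF16:
# nearest_terms("customers", "Technology", embeddings)
```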
[
" As a final analysis, we explore how words that are emotionally charged relate to different industries. To quantify the emotional orientation of a text, we use the Positive Emotion and Negative Emotion categories in the Linguistic Inquiry and Word Count (LIWC) dictionary BIBREF29 . The LIWC dictionary contains lists of words that have been shown to correlate with the psychological states of people that use them; for example, the Positive Emotion category contains words such as “happy,” “pretty,” and “good.”",
"For the text of all the users in each industry we measure the frequencies of Positive Emotion and Negative Emotion words normalized by the text's length. Table TABREF20 presents the industries' ranking for both categories of words based on their relative frequencies in the text of each industry.",
"We further perform a breakdown per-gender, where we once again calculate the proportion of emotionally charged words in each industry, but separately for each gender. We find that the industry rankings of the relative frequencies INLINEFORM0 of emotionally charged words for the two genders are statistically significantly correlated, which suggests that regardless of their gender, users use positive (or negative) words with a relative frequency that correlates with their industry. (In other words, even if e.g., Fashion has a larger number of women users, both men and women working in Fashion will tend to use more positive words than the corresponding gender in another industry with a larger number of men users such as Automotive.)",
"Finally, motivated by previous findings of correlations between job satisfaction and gender dominance in the workplace BIBREF30 , we explore the relationship between the usage of Positive Emotion and Negative Emotion words and the gender dominance in an industry. Although we find that there are substantial gender imbalances in each industry (Appendix SECREF9 ), we did not find any statistically significant correlation between the gender dominance ratio in the different industries and the usage of positive (or negative) emotional words in either gender in our dataset.",
""
],
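A minimal sketch of the frequency computation described above follows. The small word sets are placeholders standing in for the licensed LIWC Positive Emotion and Negative Emotion categories, and the Spearman rank correlation used to compare the two genders' industry rankings is an assumption, since the text does not name the correlation test.

```python
# Sketch under stated assumptions: POSITIVE/NEGATIVE are tiny stand-ins for the
# LIWC categories, and spearmanr is one reasonable choice of rank correlation.
from collections import defaultdict
from scipy.stats import spearmanr

POSITIVE = {"happy", "pretty", "good"}   # illustrative subset only
NEGATIVE = {"sad", "angry", "bad"}       # illustrative subset only

def industry_frequencies(users, lexicon):
    """users: iterable of (industry, tokens) pairs; pools all text per industry
    and returns lexicon hits normalised by the pooled text length."""
    hits, totals = defaultdict(int), defaultdict(int)
    for industry, tokens in users:
        hits[industry] += sum(token in lexicon for token in tokens)
        totals[industry] += len(tokens)
    return {ind: hits[ind] / max(totals[ind], 1) for ind in totals}

# Per-gender comparison (as in the analysis above):
# freq_m = industry_frequencies(male_users, POSITIVE)
# freq_f = industry_frequencies(female_users, POSITIVE)
# industries = sorted(set(freq_m) & set(freq_f))
# rho, p = spearmanr([freq_m[i] for i in industries],
#                    [freq_f[i] for i in industries])
```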
[
" In this paper, we examined the task of predicting a social media user's industry. We introduced an annotated dataset of over 20,000 blog users and applied a content-based classifier in conjunction with two feature selection methods for an overall accuracy of up to 0.534, which represents a large improvement over the majority class baseline of 0.188.",
"We also demonstrated how the user metadata can be incorporated in our classifiers. Although concatenation of features drawn both from blog content and profile elements did not yield any clear improvements over the best individual classifiers, we found that stacking improves the prediction accuracy to an overall accuracy of 0.643, as measured on our test dataset. A more in-depth analysis showed that not all industries are equally easy to predict: while industries such as Real Estate and Religion are clearly distinguishable with accuracy figures over 0.80, others such as Banking are much harder to predict.",
"Finally, we presented a qualitative analysis to provide some insights into the language of different industries, which highlighted differences in the top-ranked words in each industry, word semantic similarities, and the relative frequency of emotionally charged words."
],
[
"This material is based in part upon work supported by the National Science Foundation (#1344257) and by the John Templeton Foundation (#48503). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the John Templeton Foundation."
],
[
"",
""
]
],
"section_name": [
"Introduction",
"Related Work",
"Dataset",
"Text-based Industry Modeling",
"Leveraging Blog Content",
"Leveraging User Metadata",
"Qualitative Analysis",
"Top-Ranked Words",
"Industry-specific Word Similarities",
"Emotional Orientation per Industry and Gender",
"Conclusion",
"Acknowledgments",
"Additional Examples of Word Similarities"
]
} | {
"answers": [
{
"annotation_id": [
"32929172823419690738293baa338ad9a1831e43",
"b1a9077656e00a2c39fb6616743105a93dad1bda"
],
"answer": [
{
"evidence": [
"The final set of categories is shown in Table TABREF1 , along with the number of users in each category. The resulting dataset consists of 22,880 users, 41,094 blogs, and 561,003 posts. Table TABREF2 presents additional statistics of our dataset."
],
"extractive_spans": [
"22,880 users"
],
"free_form_answer": "",
"highlighted_evidence": [
"The resulting dataset consists of 22,880 users, 41,094 blogs, and 561,003 posts. Table TABREF2 presents additional statistics of our dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Specifically, this paper makes four main contributions. First, we build a large, industry-annotated dataset that contains over 20,000 blog users. In addition to their posted text, we also link a number of user metadata including their gender, location, occupation, introduction and interests."
],
"extractive_spans": [
"20,000"
],
"free_form_answer": "",
"highlighted_evidence": [
" First, we build a large, industry-annotated dataset that contains over 20,000 blog users. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"361248c79983aec65ed79dd55d27c29fa19d598f",
"908e3fa95bc1a001f83e0259a19116bcd8537aa8"
],
"answer": [
{
"evidence": [
"This paper explores the potential of predicting a user's industry –the aggregate of enterprises in a particular field– by identifying industry indicative text in social media. The accurate prediction of users' industry can have a big impact on targeted advertising by minimizing wasted advertising BIBREF4 and improved personalized user experience. A number of studies in the social sciences have associated language use with social factors such as occupation, social class, education, and income BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . An additional goal of this paper is to examine such findings, and in particular the link between language and occupational class, through a data-driven approach."
],
"extractive_spans": [
"the aggregate of enterprises in a particular field"
],
"free_form_answer": "",
"highlighted_evidence": [
"This paper explores the potential of predicting a user's industry –the aggregate of enterprises in a particular field– by identifying industry indicative text in social media. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"This paper explores the potential of predicting a user's industry –the aggregate of enterprises in a particular field– by identifying industry indicative text in social media. The accurate prediction of users' industry can have a big impact on targeted advertising by minimizing wasted advertising BIBREF4 and improved personalized user experience. A number of studies in the social sciences have associated language use with social factors such as occupation, social class, education, and income BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . An additional goal of this paper is to examine such findings, and in particular the link between language and occupational class, through a data-driven approach."
],
"extractive_spans": [
"the aggregate of enterprises in a particular field"
],
"free_form_answer": "",
"highlighted_evidence": [
"This paper explores the potential of predicting a user's industry –the aggregate of enterprises in a particular field– by identifying industry indicative text in social media. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"ae94f690a8212e097a724778758f00c3cd4a04d0"
],
"answer": [
{
"evidence": [
"After excluding all the words that are not used by at least three separate users in our training set, we build our AllWords model by counting the frequencies of all the remaining words and training a multinomial Naive Bayes classifier. As seen in Figure FIGREF3 , we can far exceed the Majority baseline performance by incorporating basic language signals into machine learning algorithms (173% INLINEFORM0 improvement)."
],
"extractive_spans": [
"AllWords model by counting the frequencies of all the remaining words and training a multinomial Naive Bayes classifier"
],
"free_form_answer": "",
"highlighted_evidence": [
"After excluding all the words that are not used by at least three separate users in our training set, we build our AllWords model by counting the frequencies of all the remaining words and training a multinomial Naive Bayes classifier. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"33a465590d42a2997e222876bb137527383d4203",
"924def1e9f2a7898d69c02466e0ab4ea0d5529fb"
],
"answer": [
{
"evidence": [
"We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. on the profile finder on http://www.blogger.com, and scraping only those users that had the industry profile element completed."
],
"extractive_spans": [
" http://www.blogger.com"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. on the profile finder on http://www.blogger.com, and scraping only those users that had the industry profile element completed."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. on the profile finder on http://www.blogger.com, and scraping only those users that had the industry profile element completed."
],
"extractive_spans": [
"http://www.blogger.com"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. on the profile finder on http://www.blogger.com, and scraping only those users that had the industry profile element completed."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"c5bde4d70efb2e451459a6c42378613e5866eeb1",
"f3a0e0e9c19cd781a026be2f68a0480277fc3fac"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Industry categories and number of users per category."
],
"extractive_spans": [],
"free_form_answer": "technology, religion, fashion, publishing, sports or recreation, real estate, agriculture/environment, law, security/military, tourism, construction, museums or libraries, banking/investment banking, automotive",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Industry categories and number of users per category."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 7: Three top-ranked words for each industry."
],
"extractive_spans": [],
"free_form_answer": "Technology, Religion, Fashion, Publishing, Sports coach, Real Estate, Law, Environment, Tourism, Construction, Museums, Banking, Security, Automotive.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 7: Three top-ranked words for each industry."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"How many users do they look at?",
"What do they mean by a person's industry?",
"What model did they use for their system?",
"What social media platform did they look at?",
"What are the industry classes defined in this paper?"
],
"question_id": [
"692c9c5d9ff9cd3e0ce8b5e4fa68dda9bd23dec1",
"935d6a6187e6a0c9c0da8e53a42697f853f5c248",
"3b77b4defc8a139992bd0b07b5cf718382cb1a5f",
"01a41c0a4a7365cd37d28690735114f2ff5229f2",
"cd2878c5a52542ddf080b20bec005d9a74f2d916"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"social media",
"social media",
"social media",
"social media",
"social media"
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Industry categories and number of users per category.",
"Table 2: Statistics on the Blogger dataset.",
"Figure 1: Feature evaluation on the industry prediction task using Information Gain Ratio (IGR) and our Aggressive Feature Ranking (AFR). The performance is measured using both accuracy (mAcc) and average per-class accuracy (MAcc).",
"Table 3: Proportion of users with non-empty metadata fields.",
"Table 4: Accuracy (mAcc) and average per-class accuracy (MAcc) of the base metadata classifiers on the development set.",
"Table 5: Kappa scores and double fault results of the base classifiers on development data.",
"Table 6: Performance of feature concatenation (early fusion) and stacking (late fusion) on the development set.",
"Figure 2: Accuracy per-class using stacking meta-classification.",
"Table 7: Three top-ranked words for each industry.",
"Table 9: Terms with the highest cosine similarity to the term food.",
"Table 8: Terms with the highest cosine similarity to the term customers.",
"Table 10: Ranking of industries based on the relative frequencies fi of Positive Emotion and Negative Emotion words.",
"Table 11: Terms with the highest cosine similarity to the term professional.",
"Table 12: Terms with the highest cosine similarity to the term leisure.",
"Figure 3: Gender dominance for the different industries."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Figure1-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png",
"5-Table6-1.png",
"5-Figure2-1.png",
"6-Table7-1.png",
"6-Table9-1.png",
"6-Table8-1.png",
"7-Table10-1.png",
"8-Table11-1.png",
"8-Table12-1.png",
"8-Figure3-1.png"
]
} | [
"What are the industry classes defined in this paper?"
] | [
[
"1612.08205-2-Table1-1.png",
"1612.08205-6-Table7-1.png"
]
] | [
"Technology, Religion, Fashion, Publishing, Sports coach, Real Estate, Law, Environment, Tourism, Construction, Museums, Banking, Security, Automotive."
] | 137 |
1907.09369 | Emotion Detection in Text: Focusing on Latent Representation | In recent years, emotion detection in text has become more popular due to its vast potential applications in marketing, political science, psychology, human-computer interaction, artificial intelligence, etc. In this work, we argue that current methods which are based on conventional machine learning models cannot grasp the intricacy of emotional language by ignoring the sequential nature of the text, and the context. These methods, therefore, are not sufficient to create an applicable and generalizable emotion detection methodology. Understanding these limitations, we present a new network based on a bidirectional GRU model to show that capturing more meaningful information from text can significantly improve the performance of these models. The results show significant improvement with an average of 26.8 point increase in F-measure on our test data and 38.6 increase on the totally new dataset. | {
"paragraphs": [
[
"There have been many advances in machine learning methods which help machines understand human behavior better than ever. One of the most important aspects of human behavior is emotion. If machines could detect human emotional expressions, it could be used to improve on verity of applications such as marketing BIBREF0 , human-computer interactions BIBREF1 , political science BIBREF2 etc.",
"Emotion in humans is complex and hard to distinguish. There have been many emotional models in psychology which tried to classify and point out basic human emotions such as Ekman's 6 basic emotions BIBREF3 , Plutchik's wheel of emotions BIBREF4 , or Parrott's three-level categorization of emotions BIBREF5 . These varieties show that emotions are hard to define, distinguish, and categorize even for human experts.",
"By adding the complexity of language and the fact that emotion expressions are very complex and context dependant BIBREF6 , BIBREF7 , BIBREF8 , we can see why detecting emotions in textual data is a challenging task. This difficulty can be seen when human annotators try to assign emotional labels to the text, but using various techniques the annotation task can be accomplished with desirable agreement among the annotators BIBREF9 ."
],
[
"A lot of work has been done on detecting emotion in speech or visual data BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . But detecting emotions in textual data is a relatively new area that demands more research. There have been many attempts to detect emotions in text using conventional machine learning techniques and handcrafted features in which given the dataset, the authors try to find the best feature set that represents the most and the best information about the text, then passing the converted text as feature vectors to the classifier for training BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . During the process of creating the feature set, in these methods, some of the most important information in the text such as the sequential nature of the data, and the context will be lost.",
"Considering the complexity of the task, and the fact that these models lose a lot of information by using simpler models such as the bag of words model (BOW) or lexicon features, these attempts lead to methods which are not reusable and generalizable. Further improvement in classification algorithms, and trying out new paths is necessary in order to improve the performance of emotion detection methods. Some suggestions that were less present in the literature, are to develop methods that go above lexical representations and consider the flow of the language.",
"Due to this sequential nature, recurrent and convolutional neural networks have been used in many NLP tasks and were able to improve the performance in a variety of classification tasks BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 . There have been very few works in using deep neural network for emotion detection in text BIBREF31 , BIBREF32 . These models can capture the complexity an context of the language better not only by keeping the sequential information but also by creating hidden representation for the text as a whole and by learning the important features without any additional (and often incomplete) human-designed features.",
"In this work, we argue that creating a model that can better capture the context and sequential nature of text , can significantly improve the performance in the hard task of emotion detection. We show this by using a recurrent neural network-based classifier that can learn to create a more informative latent representation of the target text as a whole, and we show that this can improve the final performance significantly. Based on that, we suggest focusing on methodologies that increase the quality of these latent representations both contextually and emotionally, can improve the performance of these models. Based on this assumption we propose a deep recurrent neural network architecture to detect discrete emotions in a tweet dataset. The code can be accessed at GitHub [https://github.com/armintabari/Emotion-Detection-RNN]."
],
[
"We compare our approach to two other, the first one uses almost the same tweet data as we use for training, and the second one is the CrowdFlower dataset annotated for emotions.",
"In the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. To assess the quality of using hashtags as labels, the sampled 400 tweets randomly and after comparing human annotations by hashtag labels they came up with simple heuristics to increase the quality of labeling by ignoring tweets with quotations and URLs and only keeping tweets with 5 terms or more that have the emotional hashtags at the end of the tweets. Using these rules they extracted around 2.5M tweets. After sampling another 400 random tweets and comparing it to human annotation the saw that hashtags can classify the tweets with 95% precision. They did some pre-processing by making all words lower-case, replaced user mentions with @user, replaced letters/punctuation that is repeated more than twice with the same two letters/punctuation (e.g., ooooh INLINEFORM0 ooh, !!!!! INLINEFORM1 !!); normalized some frequently used informal expressions (e.g., ll → will, dnt INLINEFORM2 do not); and stripped hash symbols. They used a sub-sample of their dataset to figure out the best approaches for classification, and after trying two different classifiers (multinomial Naive Bayes and LIBLINEAR) and 12 different feature sets, they got their best results using logistic regression branch for LIBLINEAR classifier and a feature set consist of n-gram(n=1,2), LIWC and MPQA lexicons, WordNet-Affect and POS tags.",
"In the second one, the reported results are from a paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets. Here we only report part of their result for CrowdFlower dataset that can be mapped to one of our seven labels."
],
[
"There are not many free datasets available for emotion classification. Most datasets are subject-specific (i.e. news headlines, fairy tails, etc.) and not big enough to train deep neural networks. Here we use the tweet dataset created by Wang et al. As mentioned in the previous section, they have collected over 2 million tweets by using hashtags for labeling their data. They created a list of words associated with 7 emotions (six emotions from BIBREF34 love, joy, surprise, anger, sadness fear plus thankfulness (See Table TABREF3 ), and used the list as their guide to label the sampled tweets with acceptable quality.",
"After pre-processing, they have used 250k tweets as the test set, around 250k as development test and the rest of the data (around 2M) as training data. their best results using LIBLINEAR classifier and a feature set containing n-gram(n=1,2), LIWC and MPQA lexicons, WordNet-Affect and POS tags can be seen in Table TABREF4 . It can be seen that their best results were for high count emotions like joy and sadness as high as 72.1 in F-measure and worst result was for a low count emotion surprise with F-measure of 13.9.",
"As Twitter is against polishing this many tweets, Wang et al. provided the tweet ids along with their label. For our experiment, we retrieved the tweets in Wang et al.'s dataset by tweet IDs. As the dataset is from 7 years ago We could only download over 1.3 million tweets from around 2.5M tweet IDs in the dataset. The distribution of the data can be seen in Table TABREF5 .",
"In our experiment, we used simpler pre-processing steps which will be explained later on in the \"Experiment\" section."
],
[
"In this section, we introduce the deep neural network architecture that we used to classify emotions in the tweets dataset. Emotional expressions are more complex and context-dependent even compared to other forms of expressions based mostly on the complexity and ambiguity of human emotions and emotional expressions and the huge impact of context on the understanding of the expressed emotion. These complexities are what led us to believe lexicon-based features like is normally used in conventional machine learning approaches are unable to capture the intricacy of emotional expressions.",
"Our architecture was designed to show that using a model that captures better information about the context and sequential nature of the text can outperform lexicon-based methods commonly used in the literature. As mentioned in the Introduction, Recurrent Neural Networks (RNNs) have been shown to perform well for the verity of tasks in NLP, especially classification tasks. And as our goal was to capture more information about the context and sequential nature of the text, we decided to use a model based on bidirectional RNN, specifically a bidirectional GRU network to analyze the tweets.",
"For building the emotion classifier, we have decided to use 7 binary classifiers-one for each emotion- each of which uses the same architecture for detecting a specific emotion. You can see the plot diagram of the model in Figure FIGREF6 . The first layer consists of an embedding lookup layer that will not change during training and will be used to convert each term to its corresponding embedding vector. In our experiments, we tried various word embedding models but saw little difference in their performance. Here we report the results for two which had the best performance among all, ConceptNet Numberbatch BIBREF35 and fastText BIBREF36 both had 300 dimensions.",
"As none of our tweets had more than 35 terms, we set the size of the embedding layer to 35 and added padding to shorter tweets. The output of this layer goes to a bidirectional GRU layer selected to capture the entirety of each tweet before passing its output forward. The goal is to create an intermediate representation for the tweets that capture the sequential nature of the data. For the next step, we use a concatenation of global max-pooling and average-pooling layers (with a window size of two). Then a max-pooling was used to extract the most important features form the GRU output and an average-pooling layer was used to considers all features to create a representation for the text as a whole. These partial representations are then were concatenated to create out final hidden representation. For classification, the output of the concatenation is passed to a dense classification layer with 70 nodes along with a dropout layer with a rate of 50% to prevent over-fitting. The final layer is a sigmoid layer that generates the final output of the classifier returning the class probability."
],
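A minimal TensorFlow-Keras sketch of the architecture described above is given below. It is not the authors' code: the GRU width is not stated in the paper and is chosen arbitrarily here, the dense layer's activation is assumed to be ReLU, the pre-trained embedding matrix is replaced by a random placeholder, and the average-pooling with a window size of two is simplified to global average pooling.

```python
# Sketch under stated assumptions (GRU width, ReLU activation, random embedding
# matrix, global average pooling); training settings follow the Experiment section.
import numpy as np
from tensorflow.keras import initializers, layers, models

MAX_LEN = 35          # tweets padded/truncated to 35 terms
EMB_DIM = 300         # ConceptNet Numberbatch / fastText dimensionality
VOCAB_SIZE = 100_000  # top-N vocabulary (N varies per emotion)
GRU_UNITS = 64        # assumption: not stated in the text

# In practice this matrix would be filled from the pre-trained embeddings.
embedding_matrix = np.random.normal(size=(VOCAB_SIZE, EMB_DIM)).astype("float32")

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, EMB_DIM,
                     embeddings_initializer=initializers.Constant(embedding_matrix),
                     trainable=False)(inputs)          # frozen lookup layer
x = layers.Bidirectional(layers.GRU(GRU_UNITS, return_sequences=True))(x)
max_pool = layers.GlobalMaxPooling1D()(x)              # most salient features
avg_pool = layers.GlobalAveragePooling1D()(x)          # sequence-wide summary
x = layers.Concatenate()([max_pool, avg_pool])         # final hidden representation
x = layers.Dense(70, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)     # class probability

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, batch_size=250, epochs=20, validation_data=(X_val, y_val))
```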
[
"Minimal pre-processing was done by converting text to lower case, removing the hashtags at the end of tweets and separating each punctuation from the connected token (e.g., awesome!! INLINEFORM0 awesome !!) and replacing comma and new-line characters with white space. The text, then, was tokenized using TensorFlow-Keras tokenizer. Top N terms were selected and added to our dictionary where N=100k for higher count emotions joy, sadness, anger, love and N=50k for thankfulness and fear and N=25k for surprise. Seven binary classifiers were trained for the seven emotions with a batch size of 250 and for 20 epochs with binary cross-entropy as the objective function and Adam optimizer. The architecture of the model can be seen in Figure FIGREF6 . For training each classifier, a balanced dataset was created with selecting all tweets from the target set as class 1 and a random sample of the same size from other classes as class 0. For each classifier, 80% of the data was randomly selected as the training set, and 10% for the validation set, and 10% as the test set. As mentioned before we used the two embedding models, ConceptNet Numberbatch and fastText as the two more modern pre-trained word vector spaces to see how changing the embedding layer can affect the performance. The result of comparison among different embeddings can be seen in Table TABREF10 . It can be seen that the best performance was divided between the two embedding models with minor performance variations.",
"The comparison of our result with Wang et al. can be seen in Table TABREF9 . as shown, the results from our model shows significant improvement from 10% increase in F-measure for a high count emotion joy up to 61.7 point increase in F-measure for a low count emotion surprise. on average we showed 26.8 point increase in F-measure for all categories and more interestingly our result shows very little variance between different emotions compare to results reported by Wang et al."
],
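The preprocessing and balanced-set construction described above can be sketched as follows. The regular expressions are approximations of the listed steps, not the authors' exact rules, and the function names are hypothetical.

```python
# Sketch under stated assumptions: approximate preprocessing rules and a
# balanced binary training set for one target emotion (class 1 vs. class 0).
import random
import re

def preprocess(tweet):
    text = tweet.lower()
    text = re.sub(r"(?:#\w+\s*)+$", "", text)         # drop trailing hashtags
    text = text.replace(",", " ").replace("\n", " ")  # commas/new-lines -> space
    text = re.sub(r"([!?.;:]+)", r" \1", text)        # detach punctuation runs
    return re.sub(r"\s+", " ", text).strip()

def balanced_binary_set(texts, labels, target, seed=0):
    rng = random.Random(seed)
    positives = [t for t, l in zip(texts, labels) if l == target]
    others = [t for t, l in zip(texts, labels) if l != target]
    negatives = rng.sample(others, min(len(positives), len(others)))
    X = positives + negatives
    y = [1] * len(positives) + [0] * len(negatives)
    return X, y

# Example: X, y = balanced_binary_set(tweets, emotion_labels, "joy")
```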
[
"To asses the performance of these models on a totally unseen data, we tried to classify the CrowdFlower emotional tweets dataset. The CrowdFlower dataset consists of 40k tweets annotated via crowd-sourcing each with a single emotional label. This dataset is considered a hard dataset to classify with a lot of noise. The distribution of the dataset can be seen in Table TABREF18 . The labeling on this dataset is non-standard, so we used the following mapping for labels:",
"sadness INLINEFORM0 sadness",
"worry INLINEFORM0 fear",
"happiness INLINEFORM0 joy",
"love INLINEFORM0 love",
"surprise INLINEFORM0 surprise",
"anger INLINEFORM0 anger",
"We then classified emotions using the pre-trained models and emotionally fitted fastText embedding. The result can be seen in Table TABREF19 . The baseline results are from BIBREF33 done using BOW model and maximum entropy classifier. We saw a huge improvement from 26 point increase in F-measure for the emotion joy (happiness) up to 57 point increase for surprise with total average increase of 38.6 points. Bostan and Klinger did not report classification results for the emotion love so we did not include it in the average. These results show that our trained models perform exceptionally on a totally new dataset with a different method of annotation."
],
[
"In this paper, we have shown that using the designed RNN based network we could increase the performance of classification dramatically. We showed that keeping the sequential nature of the data can be hugely beneficial when working with textual data especially faced with the hard task of detecting more complex phenomena like emotions. We accomplished that by using a recurrent network in the process of generating our hidden representation. We have also used a max-pooling layer to capture the most relevant features and an average pooling layer to capture the text as a whole proving that we can achieve better performance by focusing on creating a more informative hidden representation. In future we can focus on improving these representations for example by using attention networks BIBREF37 , BIBREF38 to capture a more contextual representation or using language model based methods like BERT BIBREF39 that has been shown very successful in various NLP tasks."
]
],
"section_name": [
"Introduction",
"Related Work",
"Baseline Approaches",
"Data and preparation",
"Model",
"Experiment",
"Model Performances on New Dataset",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"8d92b49a95bcbdfcd1485d4e4f7b0955fa73bab7",
"97518b1566e90b4152f77d7ea14572cef8c6d180"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Results of final classification in Wang et al."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Results of final classification in Wang et al."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"6947a0564c9e7caa95a27c7ca9df273fd277d691"
],
"answer": [
{
"evidence": [
"Minimal pre-processing was done by converting text to lower case, removing the hashtags at the end of tweets and separating each punctuation from the connected token (e.g., awesome!! INLINEFORM0 awesome !!) and replacing comma and new-line characters with white space. The text, then, was tokenized using TensorFlow-Keras tokenizer. Top N terms were selected and added to our dictionary where N=100k for higher count emotions joy, sadness, anger, love and N=50k for thankfulness and fear and N=25k for surprise. Seven binary classifiers were trained for the seven emotions with a batch size of 250 and for 20 epochs with binary cross-entropy as the objective function and Adam optimizer. The architecture of the model can be seen in Figure FIGREF6 . For training each classifier, a balanced dataset was created with selecting all tweets from the target set as class 1 and a random sample of the same size from other classes as class 0. For each classifier, 80% of the data was randomly selected as the training set, and 10% for the validation set, and 10% as the test set. As mentioned before we used the two embedding models, ConceptNet Numberbatch and fastText as the two more modern pre-trained word vector spaces to see how changing the embedding layer can affect the performance. The result of comparison among different embeddings can be seen in Table TABREF10 . It can be seen that the best performance was divided between the two embedding models with minor performance variations.",
"FLOAT SELECTED: Figure 1: Bidirectional GRU architecture used in our experiment."
],
"extractive_spans": [],
"free_form_answer": "They use the embedding layer with a size 35 and embedding dimension of 300. They use a dense layer with 70 units and a dropout layer with a rate of 50%.",
"highlighted_evidence": [
"The architecture of the model can be seen in Figure FIGREF6 .",
"FLOAT SELECTED: Figure 1: Bidirectional GRU architecture used in our experiment."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"43a6fcb8b153e953a3f842e7b06ddbf8b48efca3",
"a7a9bbd623aa8c29ec2f1a381ac8893a99c263ef"
],
"answer": [
{
"evidence": [
"We compare our approach to two other, the first one uses almost the same tweet data as we use for training, and the second one is the CrowdFlower dataset annotated for emotions.",
"In the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. To assess the quality of using hashtags as labels, the sampled 400 tweets randomly and after comparing human annotations by hashtag labels they came up with simple heuristics to increase the quality of labeling by ignoring tweets with quotations and URLs and only keeping tweets with 5 terms or more that have the emotional hashtags at the end of the tweets. Using these rules they extracted around 2.5M tweets. After sampling another 400 random tweets and comparing it to human annotation the saw that hashtags can classify the tweets with 95% precision. They did some pre-processing by making all words lower-case, replaced user mentions with @user, replaced letters/punctuation that is repeated more than twice with the same two letters/punctuation (e.g., ooooh INLINEFORM0 ooh, !!!!! INLINEFORM1 !!); normalized some frequently used informal expressions (e.g., ll → will, dnt INLINEFORM2 do not); and stripped hash symbols. They used a sub-sample of their dataset to figure out the best approaches for classification, and after trying two different classifiers (multinomial Naive Bayes and LIBLINEAR) and 12 different feature sets, they got their best results using logistic regression branch for LIBLINEAR classifier and a feature set consist of n-gram(n=1,2), LIWC and MPQA lexicons, WordNet-Affect and POS tags.",
"In the second one, the reported results are from a paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets. Here we only report part of their result for CrowdFlower dataset that can be mapped to one of our seven labels."
],
"extractive_spans": [
" Wang et al. BIBREF21",
"paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare our approach to two other, the first one uses almost the same tweet data as we use for training, and the second one is the CrowdFlower dataset annotated for emotions.\n\nIn the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. ",
"In the second one, the reported results are from a paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets. Here we only report part of their result for CrowdFlower dataset that can be mapped to one of our seven labels."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. To assess the quality of using hashtags as labels, the sampled 400 tweets randomly and after comparing human annotations by hashtag labels they came up with simple heuristics to increase the quality of labeling by ignoring tweets with quotations and URLs and only keeping tweets with 5 terms or more that have the emotional hashtags at the end of the tweets. Using these rules they extracted around 2.5M tweets. After sampling another 400 random tweets and comparing it to human annotation the saw that hashtags can classify the tweets with 95% precision. They did some pre-processing by making all words lower-case, replaced user mentions with @user, replaced letters/punctuation that is repeated more than twice with the same two letters/punctuation (e.g., ooooh INLINEFORM0 ooh, !!!!! INLINEFORM1 !!); normalized some frequently used informal expressions (e.g., ll → will, dnt INLINEFORM2 do not); and stripped hash symbols. They used a sub-sample of their dataset to figure out the best approaches for classification, and after trying two different classifiers (multinomial Naive Bayes and LIBLINEAR) and 12 different feature sets, they got their best results using logistic regression branch for LIBLINEAR classifier and a feature set consist of n-gram(n=1,2), LIWC and MPQA lexicons, WordNet-Affect and POS tags.",
"In the second one, the reported results are from a paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets. Here we only report part of their result for CrowdFlower dataset that can be mapped to one of our seven labels."
],
"extractive_spans": [
"Wang et al. ",
"maximum entropy classifier with bag of words model"
],
"free_form_answer": "",
"highlighted_evidence": [
"In the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. ",
"In the second one, the reported results are from a paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"9fde72851a7a5e62ff27030e230c491f9122898e",
"c692b485df24beb6dbc96939d251b0125f7c0b48"
],
"answer": [
{
"evidence": [
"We compare our approach to two other, the first one uses almost the same tweet data as we use for training, and the second one is the CrowdFlower dataset annotated for emotions.",
"In the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. To assess the quality of using hashtags as labels, the sampled 400 tweets randomly and after comparing human annotations by hashtag labels they came up with simple heuristics to increase the quality of labeling by ignoring tweets with quotations and URLs and only keeping tweets with 5 terms or more that have the emotional hashtags at the end of the tweets. Using these rules they extracted around 2.5M tweets. After sampling another 400 random tweets and comparing it to human annotation the saw that hashtags can classify the tweets with 95% precision. They did some pre-processing by making all words lower-case, replaced user mentions with @user, replaced letters/punctuation that is repeated more than twice with the same two letters/punctuation (e.g., ooooh INLINEFORM0 ooh, !!!!! INLINEFORM1 !!); normalized some frequently used informal expressions (e.g., ll → will, dnt INLINEFORM2 do not); and stripped hash symbols. They used a sub-sample of their dataset to figure out the best approaches for classification, and after trying two different classifiers (multinomial Naive Bayes and LIBLINEAR) and 12 different feature sets, they got their best results using logistic regression branch for LIBLINEAR classifier and a feature set consist of n-gram(n=1,2), LIWC and MPQA lexicons, WordNet-Affect and POS tags."
],
"extractive_spans": [
"Wang et al.",
"CrowdFlower dataset "
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare our approach to two other, the first one uses almost the same tweet data as we use for training, and the second one is the CrowdFlower dataset annotated for emotions.",
"In the first one Wang et al. BIBREF21 downloaded over 5M tweets which included one of 131 emotional hashtags based on Parrott's three-level categorization of emotions in seven categories: joy, sadness, anger, love, fear, thankfulness, surprise. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"There are not many free datasets available for emotion classification. Most datasets are subject-specific (i.e. news headlines, fairy tails, etc.) and not big enough to train deep neural networks. Here we use the tweet dataset created by Wang et al. As mentioned in the previous section, they have collected over 2 million tweets by using hashtags for labeling their data. They created a list of words associated with 7 emotions (six emotions from BIBREF34 love, joy, surprise, anger, sadness fear plus thankfulness (See Table TABREF3 ), and used the list as their guide to label the sampled tweets with acceptable quality.",
"In the second one, the reported results are from a paper by BIBREF33 in which they used maximum entropy classifier with bag of words model to classify various emotional datasets. Here we only report part of their result for CrowdFlower dataset that can be mapped to one of our seven labels."
],
"extractive_spans": [
" tweet dataset created by Wang et al. ",
"CrowdFlower dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"Here we use the tweet dataset created by Wang et al. ",
"Here we only report part of their result for CrowdFlower dataset that can be mapped to one of our seven labels."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"904b4500cdfd9cf60b54d3e072a7779262f0c9cc",
"9e66739dcd1d8a859907320b7b73c55986c6fa4b"
],
"answer": [
{
"evidence": [
"Our architecture was designed to show that using a model that captures better information about the context and sequential nature of the text can outperform lexicon-based methods commonly used in the literature. As mentioned in the Introduction, Recurrent Neural Networks (RNNs) have been shown to perform well for the verity of tasks in NLP, especially classification tasks. And as our goal was to capture more information about the context and sequential nature of the text, we decided to use a model based on bidirectional RNN, specifically a bidirectional GRU network to analyze the tweets."
],
"extractive_spans": [
" the context and sequential nature of the text"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our architecture was designed to show that using a model that captures better information about the context and sequential nature of the text can outperform lexicon-based methods commonly used in the literature."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our architecture was designed to show that using a model that captures better information about the context and sequential nature of the text can outperform lexicon-based methods commonly used in the literature. As mentioned in the Introduction, Recurrent Neural Networks (RNNs) have been shown to perform well for the verity of tasks in NLP, especially classification tasks. And as our goal was to capture more information about the context and sequential nature of the text, we decided to use a model based on bidirectional RNN, specifically a bidirectional GRU network to analyze the tweets."
],
"extractive_spans": [
"information about the context and sequential nature of the text"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our architecture was designed to show that using a model that captures better information about the context and sequential nature of the text can outperform lexicon-based methods commonly used in the literature."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"What are the hyperparameters of the bi-GRU?",
"What baseline is used?",
"What data is used in experiments?",
"What meaningful information does the GRU model capture, which traditional ML models do not?"
],
"question_id": [
"fd2c6c26fd0ab3c10aae4f2550c5391576a77491",
"6b6d498546f856ac20958f666fc3fd55811347e2",
"de3b1145cb4111ea2d4e113f816b537d052d9814",
"132f752169adf6dc5ade3e4ca773c11044985da4",
"1d9aeeaa6efa1367c22be0718f5a5635a73844bd"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Statistics in the original dataset from Wang et al.",
"Table 2: Results of final classification in Wang et al.",
"Figure 1: Bidirectional GRU architecture used in our experiment.",
"Table 3: Statistics in the downloaded dataset from Wang et al (2012). This is the main dataset used for training the model.",
"Table 5: Results of classification using two embedding models and bidirectional GRU. No meaningful differences was seen between the two models. Reported numbers are F1measures.",
"Table 4: Results of classification using bidirectional GRU. Reported numbers are F1-measures.",
"Table 6: Distribution of labels in CrowdFlower dataset.",
"Table 7: Results from classifying CrowdFlower data using pre-trained model. Reported numbers are F1-measure."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Figure1-1.png",
"3-Table3-1.png",
"4-Table5-1.png",
"4-Table4-1.png",
"5-Table6-1.png",
"5-Table7-1.png"
]
} | [
"What are the hyperparameters of the bi-GRU?"
] | [
[
"1907.09369-Experiment-0",
"1907.09369-3-Figure1-1.png"
]
] | [
"They use the embedding layer with a size 35 and embedding dimension of 300. They use a dense layer with 70 units and a dropout layer with a rate of 50%."
] | 138 |
1911.07555 | Short Text Language Identification for Under Resourced Languages | The paper presents a hierarchical naive Bayesian and lexicon based classifier for short text language identification (LID) useful for under resourced languages. The algorithm is evaluated on short pieces of text for the 11 official South African languages some of which are similar languages. The algorithm is compared to recent approaches using test sets from previous works on South African languages as well as the Discriminating between Similar Languages (DSL) shared tasks' datasets. Remaining research opportunities and pressing concerns in evaluating and comparing LID approaches are also discussed. | {
"paragraphs": [
[
"Accurate language identification (LID) is the first step in many natural language processing and machine comprehension pipelines. If the language of a piece of text is known then the appropriate downstream models like parts of speech taggers and language models can be applied as required.",
"LID is further also an important step in harvesting scarce language resources. Harvested data can be used to bootstrap more accurate LID models and in doing so continually improve the quality of the harvested data. Availability of data is still one of the big roadblocks for applying data driven approaches like supervised machine learning in developing countries.",
"Having 11 official languages of South Africa has lead to initiatives (discussed in the next section) that have had positive effect on the availability of language resources for research. However, many of the South African languages are still under resourced from the point of view of building data driven models for machine comprehension and process automation.",
"Table TABREF2 shows the percentages of first language speakers for each of the official languages of South Africa. These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as tshiVenda (ven) and Xitsonga (tso). The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages.",
"This paper presents a hierarchical naive Bayesian and lexicon based classifier for LID of short pieces of text of 15-20 characters long. The algorithm is evaluated against recent approaches using existing test sets from previous works on South African languages as well as the Discriminating between Similar Languages (DSL) 2015 and 2017 shared tasks.",
"Section SECREF2 reviews existing works on the topic and summarises the remaining research problems. Section SECREF3 of the paper discusses the proposed algorithm and Section SECREF4 presents comparative results."
],
[
"The focus of this section is on recently published datasets and LID research applicable to the South African context. An in depth survey of algorithms, features, datasets, shared tasks and evaluation methods may be found in BIBREF0.",
"The datasets for the DSL 2015 & DSL 2017 shared tasks BIBREF1 are often used in LID benchmarks and also available on Kaggle . The DSL datasets, like other LID datasets, consists of text sentences labelled by language. The 2017 dataset, for example, contains 14 languages over 6 language groups with 18000 training samples and 1000 testing samples per language.",
"The recently published JW300 parallel corpus BIBREF2 covers over 300 languages with around 100 thousand parallel sentences per language pair on average. In South Africa, a multilingual corpus of academic texts produced by university students with different mother tongues is being developed BIBREF3. The WiLI-2018 benchmark dataset BIBREF4 for monolingual written natural language identification includes around 1000 paragraphs of 235 languages. A possibly useful link can also be made BIBREF5 between Native Language Identification (NLI) (determining the native language of the author of a text) and Language Variety Identification (LVI) (classification of different varieties of a single language) which opens up more datasets. The Leipzig Corpora Collection BIBREF6, the Universal Declaration of Human Rights and Tatoeba are also often used sources of data.",
"The NCHLT text corpora BIBREF7 is likely a good starting point for a shared LID task dataset for the South African languages BIBREF8. The NCHLT text corpora contains enough data to have 3500 training samples and 600 testing samples of 300+ character sentences per language. Researchers have recently started applying existing algorithms for tasks like neural machine translation in earnest to such South African language datasets BIBREF9.",
"Existing NLP datasets, models and services BIBREF10 are available for South African languages. These include an LID algorithm BIBREF11 that uses a character level n-gram language model. Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for doing LID. The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed on the shared task and the winning approach BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best known efficient 'shallow' text classifiers that have been used for LID .",
"Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that would for example first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0. Some work has also been done on classifying surnames between Tshivenda, Xitsonga and Sepedi BIBREF20. Additionally, data augmentation BIBREF21 and adversarial training BIBREF22 approaches are potentially very useful to reduce the requirement for data.",
"Researchers have investigated deeper LID models like bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24. The latter is reported to achieve 95.12% in the DSL 2015 shared task. In these models text features can include character and word n-grams as well as informative character and word-level features learnt BIBREF25 from the training data. The neural methods seem to work well in tasks where more training data is available.",
"In summary, LID of short texts, informal styles and similar languages remains a difficult problem which is actively being researched. Increased confusion can in general be expected between shorter pieces of text and languages that are more closely related. Shallow methods still seem to work well compared to deeper models for LID. Other remaining research opportunities seem to be data harvesting, building standardised datasets and creating shared tasks for South Africa and Africa. Support for language codes that include more languages seems to be growing and discoverability of research is improving with more survey papers coming out. Paywalls also seem to no longer be a problem; the references used in this paper was either openly published or available as preprint papers."
],
[
"The proposed LID algorithm builds on the work in BIBREF8 and BIBREF26. We apply a naive Bayesian classifier with character (2, 4 & 6)-grams, word unigram and word bigram features with a hierarchical lexicon based classifier.",
"The naive Bayesian classifier is trained to predict the specific language label of a piece of text, but used to first classify text as belonging to either the Nguni family, the Sotho family, English, Afrikaans, Xitsonga or Tshivenda. The scikit-learn multinomial naive Bayes classifier is used for the implementation with an alpha smoothing value of 0.01 and hashed text features.",
"The lexicon based classifier is then used to predict the specific language within a language group. For the South African languages this is done for the Nguni and Sotho groups. If the lexicon prediction of the specific language has high confidence then its result is used as the final label else the naive Bayesian classifier's specific language prediction is used as the final result. The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets.",
"The lexicon based classifier is designed to trade higher precision for lower recall. The proposed implementation is considered confident if the number of words from the winning language is at least one more than the number of words considered to be from the language scored in second place.",
"The stacked classifier is tested against three public LID implementations BIBREF17, BIBREF23, BIBREF8. The LID implementation described in BIBREF17 is available on GitHub and is trained and tested according to a post on the fasttext blog. Character (5-6)-gram features with 16 dimensional vectors worked the best. The implementation discussed in BIBREF23 is available from https://github.com/tomkocmi/LanideNN. Following the instructions for an OSX pip install of an old r0.8 release of TensorFlow, the LanideNN code could be executed in Python 3.7.4. Settings were left at their defaults and a learning rate of 0.001 was used followed by a refinement with learning rate of 0.0001. Only one code modification was applied to return the results from a method that previously just printed to screen. The LID algorithm described in BIBREF8 is also available on GitHub.",
"The stacked classifier is also tested against the results reported for four other algorithms BIBREF16, BIBREF26, BIBREF24, BIBREF15. All the comparisons are done using the NCHLT BIBREF7, DSL 2015 BIBREF19 and DSL 2017 BIBREF1 datasets discussed in Section SECREF2."
],
[
"The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8.",
"Different variations of the proposed classifier were evaluated. A single NB classifier (NB), a stack of two NB classifiers (NB+NB), a stack of a NB classifier and lexicon (NB+Lex) and a lexicon (Lex) by itself. A lexicon with a 50% training token dropout is also listed to show the impact of the lexicon support on the accuracy.",
"From the results it seems that the DSL 2017 task might be harder than the DSL 2015 and NCHLT tasks. Also, the results for the implementation discussed in BIBREF23 might seem low, but the results reported in that paper is generated on longer pieces of text so lower scores on the shorter pieces of text derived from the NCHLT corpora is expected.",
"The accuracy of the proposed algorithm seems to be dependent on the support of the lexicon. Without a good lexicon a non-stacked naive Bayesian classifier might even perform better.",
"The execution performance of some of the LID implementations are shown in Table TABREF10. Results were generated on an early 2015 13-inch Retina MacBook Pro with a 2.9 GHz CPU (Turbo Boosted to 3.4 GHz) and 8GB RAM. The C++ implementation in BIBREF17 is the fastest. The implementation in BIBREF8 makes use of un-hashed feature representations which causes it to be slower than the proposed sklearn implementation. The execution performance of BIBREF23 might improve by a factor of five to ten when executed on a GPU."
],
[
"LID of short texts, informal styles and similar languages remains a difficult problem which is actively being researched. The proposed algorithm was evaluated on three existing datasets and compared to the implementations of three public LID implementations as well as to reported results of four other algorithms. It performed well relative to the other methods beating their results. However, the performance is dependent on the support of the lexicon.",
"We would like to investigate the value of a lexicon in a production system and how to possibly maintain it using self-supervised learning. We are investigating the application of deeper language models some of which have been used in more recent DSL shared tasks. We would also like to investigate data augmentation strategies to reduce the amount of training data that is required.",
"Further research opportunities include data harvesting, building standardised datasets and shared tasks for South Africa as well as the rest of Africa. In general, the support for language codes that include more languages seems to be growing, discoverability of research is improving and paywalls seem to no longer be a big problem in getting access to published research."
]
],
"section_name": [
"Introduction",
"Related Works",
"Methodology",
"Results and Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"0d57db99bd08c08fae9de542f4dec591cc91d0d9",
"9526eb3b524f649392369fbd540d603fcc9f5d88"
],
"answer": [
{
"evidence": [
"Existing NLP datasets, models and services BIBREF10 are available for South African languages. These include an LID algorithm BIBREF11 that uses a character level n-gram language model. Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for doing LID. The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed on the shared task and the winning approach BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best known efficient 'shallow' text classifiers that have been used for LID .",
"Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that would for example first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0. Some work has also been done on classifying surnames between Tshivenda, Xitsonga and Sepedi BIBREF20. Additionally, data augmentation BIBREF21 and adversarial training BIBREF22 approaches are potentially very useful to reduce the requirement for data.",
"Researchers have investigated deeper LID models like bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24. The latter is reported to achieve 95.12% in the DSL 2015 shared task. In these models text features can include character and word n-grams as well as informative character and word-level features learnt BIBREF25 from the training data. The neural methods seem to work well in tasks where more training data is available."
],
"extractive_spans": [
"'shallow' naive Bayes",
"SVM",
"hierarchical stacked classifiers",
"bidirectional recurrent neural networks"
],
"free_form_answer": "",
"highlighted_evidence": [
"Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for doing LID. The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed on the shared task and the winning approach BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best known efficient 'shallow' text classifiers that have been used for LID .",
"Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that would for example first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0.",
"Researchers have investigated deeper LID models li"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Existing NLP datasets, models and services BIBREF10 are available for South African languages. These include an LID algorithm BIBREF11 that uses a character level n-gram language model. Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for doing LID. The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed on the shared task and the winning approach BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best known efficient 'shallow' text classifiers that have been used for LID .",
"Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that would for example first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0. Some work has also been done on classifying surnames between Tshivenda, Xitsonga and Sepedi BIBREF20. Additionally, data augmentation BIBREF21 and adversarial training BIBREF22 approaches are potentially very useful to reduce the requirement for data.",
"Researchers have investigated deeper LID models like bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24. The latter is reported to achieve 95.12% in the DSL 2015 shared task. In these models text features can include character and word n-grams as well as informative character and word-level features learnt BIBREF25 from the training data. The neural methods seem to work well in tasks where more training data is available."
],
"extractive_spans": [
"BIBREF11 that uses a character level n-gram language model",
"'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15",
"BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features",
"The winning approach for DSL 2015 used an ensemble naive Bayes classifier",
"The fasttext classifier BIBREF17",
"hierarchical stacked classifiers (including lexicons)",
"bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24"
],
"free_form_answer": "",
"highlighted_evidence": [
"Existing NLP datasets, models and services BIBREF10 are available for South African languages. These include an LID algorithm BIBREF11 that uses a character level n-gram language model. Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for doing LID. The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed on the shared task and the winning approach BIBREF16 used an SVM with character n-gram, parts of speech tag features and some other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best known efficient 'shallow' text classifiers that have been used for LID .",
"Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that would for example first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0.",
"Researchers have investigated deeper LID models like bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24. The latter is reported to achieve 95.12% in the DSL 2015 shared task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"ceb8d56393e4f908ac45212dd82eafc9c217d323",
"f9fd7a3838b67063647c47c435953fca46c0cf6b"
],
"answer": [
{
"evidence": [
"The lexicon based classifier is then used to predict the specific language within a language group. For the South African languages this is done for the Nguni and Sotho groups. If the lexicon prediction of the specific language has high confidence then its result is used as the final label else the naive Bayesian classifier's specific language prediction is used as the final result. The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The lexicon based classifier is then used to predict the specific language within a language group. For the South African languages this is done for the Nguni and Sotho groups. If the lexicon prediction of the specific language has high confidence then its result is used as the final label else the naive Bayesian classifier's specific language prediction is used as the final result. The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"94076d78c3a81a9cb51b16b02340f8df58f639ef",
"e684d010871967e599ea1de3a66ca78fefd5e90c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"The lexicon based classifier is then used to predict the specific language within a language group. For the South African languages this is done for the Nguni and Sotho groups. If the lexicon prediction of the specific language has high confidence then its result is used as the final label else the naive Bayesian classifier's specific language prediction is used as the final result. The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets."
],
"extractive_spans": [
"built over all the data and therefore includes the vocabulary from both the training and testing sets"
],
"free_form_answer": "",
"highlighted_evidence": [
"The lexicon is built over all the data and therefore includes the vocabulary from both the training and testing sets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"7a75e1c45c5b7e54f8d3bf675b477ea9c5948d5a",
"fb9f47f34736e65e1bc8fb3a4b515f02fe2ee9da"
],
"answer": [
{
"evidence": [
"The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8."
],
"extractive_spans": [
"average classification accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"The average classification accuracy results are summarised in Table TABREF9."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8.",
"The execution performance of some of the LID implementations are shown in Table TABREF10. Results were generated on an early 2015 13-inch Retina MacBook Pro with a 2.9 GHz CPU (Turbo Boosted to 3.4 GHz) and 8GB RAM. The C++ implementation in BIBREF17 is the fastest. The implementation in BIBREF8 makes use of un-hashed feature representations which causes it to be slower than the proposed sklearn implementation. The execution performance of BIBREF23 might improve by a factor of five to ten when executed on a GPU."
],
"extractive_spans": [
"average classification accuracy",
"execution performance"
],
"free_form_answer": "",
"highlighted_evidence": [
"The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label.",
"The execution performance of some of the LID implementations are shown in Table TABREF10."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"30a61eebde7880653ff47562301753b54a375d75",
"36c33be6b782b952685459b871759039ec86e5c1"
],
"answer": [
{
"evidence": [
"Table TABREF2 shows the percentages of first language speakers for each of the official languages of South Africa. These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as tshiVenda (ven) and Xitsonga (tso). The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages."
],
"extractive_spans": [
"Nguni languages (zul, xho, nbl, ssw)",
"Sotho languages (nso, sot, tsn)"
],
"free_form_answer": "",
"highlighted_evidence": [
"These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as tshiVenda (ven) and Xitsonga (tso). The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages.",
"Similar languages are to each other are:\n- Nguni languages: zul, xho, nbl, ssw\n- Sotho languages: nso, sot, tsn"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF2 shows the percentages of first language speakers for each of the official languages of South Africa. These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as tshiVenda (ven) and Xitsonga (tso). The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages."
],
"extractive_spans": [
"The Nguni languages are similar to each other",
"The same is true of the Sotho languages"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF2 shows the percentages of first language speakers for each of the official languages of South Africa. These are four conjunctively written Nguni languages (zul, xho, nbl, ssw), Afrikaans (afr) and English (eng), three disjunctively written Sotho languages (nso, sot, tsn), as well as tshiVenda (ven) and Xitsonga (tso). The Nguni languages are similar to each other and harder to distinguish. The same is true of the Sotho languages."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"916a3872317bb3ee04090bc633b5996a0c06f7a7"
],
"answer": [
{
"evidence": [
"The focus of this section is on recently published datasets and LID research applicable to the South African context. An in depth survey of algorithms, features, datasets, shared tasks and evaluation methods may be found in BIBREF0.",
"The datasets for the DSL 2015 & DSL 2017 shared tasks BIBREF1 are often used in LID benchmarks and also available on Kaggle . The DSL datasets, like other LID datasets, consists of text sentences labelled by language. The 2017 dataset, for example, contains 14 languages over 6 language groups with 18000 training samples and 1000 testing samples per language.",
"The recently published JW300 parallel corpus BIBREF2 covers over 300 languages with around 100 thousand parallel sentences per language pair on average. In South Africa, a multilingual corpus of academic texts produced by university students with different mother tongues is being developed BIBREF3. The WiLI-2018 benchmark dataset BIBREF4 for monolingual written natural language identification includes around 1000 paragraphs of 235 languages. A possibly useful link can also be made BIBREF5 between Native Language Identification (NLI) (determining the native language of the author of a text) and Language Variety Identification (LVI) (classification of different varieties of a single language) which opens up more datasets. The Leipzig Corpora Collection BIBREF6, the Universal Declaration of Human Rights and Tatoeba are also often used sources of data.",
"The NCHLT text corpora BIBREF7 is likely a good starting point for a shared LID task dataset for the South African languages BIBREF8. The NCHLT text corpora contains enough data to have 3500 training samples and 600 testing samples of 300+ character sentences per language. Researchers have recently started applying existing algorithms for tasks like neural machine translation in earnest to such South African language datasets BIBREF9."
],
"extractive_spans": [
"DSL 2015",
"DSL 2017",
"JW300 parallel corpus ",
"NCHLT text corpora"
],
"free_form_answer": "",
"highlighted_evidence": [
"The focus of this section is on recently published datasets and LID research applicable to the South African context. An in depth survey of algorithms, features, datasets, shared tasks and evaluation methods may be found in BIBREF0.\n\nThe datasets for the DSL 2015 & DSL 2017 shared tasks BIBREF1 are often used in LID benchmarks and also available on Kaggle .",
"The recently published JW300 parallel corpus BIBREF2 covers over 300 languages with around 100 thousand parallel sentences per language pair on average.",
"The WiLI-2018 benchmark dataset BIBREF4 for monolingual written natural language identification includes around 1000 paragraphs of 235 languages.",
"The NCHLT text corpora BIBREF7 is likely a good starting point for a shared LID task dataset for the South African languages BIBREF8."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0c160bb4ef91ca885e88256f4ef1c78438757418",
"a3e460e32adf0308aa3f317eb2a256609b2ccd46"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8.",
"FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The average classification accuracy results are summarised in Table TABREF9.",
"FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"d6512f006a260dd1f37f861425866ab24928dec2"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"49e1a8fc8beee0ebd41f5e02abad2468b7740d03",
"548caee8254473df493659c215beb51e1386794a"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’.",
"The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8."
],
"extractive_spans": [],
"free_form_answer": "From all reported results proposed method (NB+Lex) shows best accuracy on all 3 datasets - some models are not evaluated and not available in literature.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’.",
"The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two",
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is the approach of previous work?",
"Is the lexicon the same for all languages?",
"How do they obtain the lexicon?",
"What evaluation metric is used?",
"Which languages are similar to each other?",
"Which datasets are employed for South African languages LID?",
"Does the paper report the performance of a baseline model on South African languages LID?",
"What are the languages represented in the DSL datasets? ",
"Does the algorithm improve on the state-of-the-art methods?"
],
"question_id": [
"012b8a89aea27485797373adbcda32f16f9d7b54",
"c598028815066089cc1e131b96d6966d2610467a",
"ca4daafdc23f4e23d933ebabe682e1fe0d4b95ed",
"0ab3df10f0b7203e859e9b62ffa7d6d79ffbbe50",
"92dfacbbfa732ecea006e251be415a6f89fb4ec6",
"c8541ff10c4e0c8e9eb37d9d7ea408d1914019a9",
"307e8ab37b67202fe22aedd9a98d9d06aaa169c5",
"6415f38a06c2f99e8627e8ba6251aa4b364ade2d",
"e5c8e9e54e77960c8c26e8e238168a603fcdfcc6"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"",
"",
"",
"",
"",
"language identification",
"language identification",
"language identification",
"language identification"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"Table 1: Percentage of South Africans by First Language",
"Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’.",
"Table 3: LID requests/sec. on NCHLT dataset. The models we executed ourselves are marked with *. For the other models the results that are not available in the literature are indicated with ’—’."
],
"file": [
"2-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"Does the algorithm improve on the state-of-the-art methods?"
] | [
[
"1911.07555-Results and Analysis-0",
"1911.07555-4-Table2-1.png"
]
] | [
"From all reported results proposed method (NB+Lex) shows best accuracy on all 3 datasets - some models are not evaluated and not available in literature."
] | 139 |
1503.00841 | Robustly Leveraging Prior Knowledge in Text Classification | Prior knowledge has been shown very useful to address many natural language processing tasks. Many approaches have been proposed to formalise a variety of knowledge, however, whether the proposed approach is robust or sensitive to the knowledge supplied to the model has rarely been discussed. In this paper, we propose three regularization terms on top of generalized expectation criteria, and conduct extensive experiments to justify the robustness of the proposed methods. Experimental results demonstrate that our proposed methods obtain remarkable improvements and are much more robust than baselines. | {
"paragraphs": [
[
"We posses a wealth of prior knowledge about many natural language processing tasks. For example, in text categorization, we know that words such as NBA, player, and basketball are strong indicators of the sports category BIBREF0 , and words like terrible, boring, and messing indicate a negative polarity while words like perfect, exciting, and moving suggest a positive polarity in sentiment classification.",
"A key problem arisen here, is how to leverage such knowledge to guide the learning process, an interesting problem for both NLP and machine learning communities. Previous studies addressing the problem fall into several lines. First, to leverage prior knowledge to label data BIBREF1 , BIBREF2 . Second, to encode prior knowledge with a prior on parameters, which can be commonly seen in many Bayesian approaches BIBREF3 , BIBREF4 . Third, to formalise prior knowledge with additional variables and dependencies BIBREF5 . Last, to use prior knowledge to control the distributions over latent output variables BIBREF6 , BIBREF7 , BIBREF8 , which makes the output variables easily interpretable.",
"However, a crucial problem, which has rarely been addressed, is the bias in the prior knowledge that we supply to the learning model. Would the model be robust or sensitive to the prior knowledge? Or, which kind of knowledge is appropriate for the task? Let's see an example: we may be a baseball fan but unfamiliar with hockey so that we can provide a few number of feature words of baseball, but much less of hockey for a baseball-hockey classification task. Such prior knowledge may mislead the model with heavy bias to baseball. If the model cannot handle this situation appropriately, the performance may be undesirable.",
"In this paper, we investigate into the problem in the framework of Generalized Expectation Criteria BIBREF7 . The study aims to reveal the factors of reducing the sensibility of the prior knowledge and therefore to make the model more robust and practical. To this end, we introduce auxiliary regularization terms in which our prior knowledge is formalized as distribution over output variables. Recall the example just mentioned, though we do not have enough knowledge to provide features for class hockey, it is easy for us to provide some neutral words, namely words that are not strong indicators of any class, like player here. As one of the factors revealed in this paper, supplying neutral feature words can boost the performance remarkably, making the model more robust.",
"More attractively, we do not need manual annotation to label these neutral feature words in our proposed approach.",
"More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) the maximum entropy of class distribution regularization term; and (3) the KL divergence between reference and predicted class distribution. For the first manner, we simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third one, we assume we have some knowledge about the class distribution which will be detailed soon later.",
"To summarize, the main contributions of this work are as follows:",
"The rest of the paper is structured as follows: In Section 2, we briefly describe the generalized expectation criteria and present the proposed regularization terms. In Section 3, we conduct extensive experiments to justify the proposed methods. We survey related work in Section 4, and summarize our work in Section 5."
],
[
"We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier. For example, words like amazing, exciting can be labeled features for class positive in sentiment classification."
],
[
"Generalized expectation (GE) criteria BIBREF7 provides us a natural way to directly constrain the model in the preferred direction. For example, when we know the proportion of each class of the dataset in a classification task, we can guide the model to predict out a pre-specified class distribution.",
"Formally, in a parameter estimation objective function, a GE term expresses preferences on the value of some constraint functions about the model's expectation. Given a constraint function $G({\\rm x}, y)$ , a conditional model distribution $p_\\theta (y|\\rm x)$ , an empirical distribution $\\tilde{p}({\\rm x})$ over input samples and a score function $S$ , a GE term can be expressed as follows: ",
"$$S(E_{\\tilde{p}({\\rm x})}[E_{p_\\theta (y|{\\rm x})}[G({\\rm x}, y)]])$$ (Eq. 4) "
],
[
"Druck et al. ge-fl proposed GE-FL to learn from labeled features using generalized expectation criteria. When given a set of labeled features $K$ , the reference distribution over classes of these features is denoted by $\\hat{p}(y| x_k), k \\in K$ . GE-FL introduces the divergence between this reference distribution and the model predicted distribution $p_\\theta (y | x_k)$ , as a term of the objective function: ",
"$$\\mathcal {O} = \\sum _{k \\in K} KL(\\hat{p}(y|x_k) || p_\\theta (y | x_k)) + \\sum _{y,i} \\frac{\\theta _{yi}^2}{2 \\sigma ^2}$$ (Eq. 6) ",
"where $\\theta _{yi}$ is the model parameter which indicates the importance of word $i$ to class $y$ . The predicted distribution $p_\\theta (y | x_k)$ can be expressed as follows: $\np_\\theta (y | x_k) = \\frac{1}{C_k} \\sum _{\\rm x} p_\\theta (y|{\\rm x})I(x_k)\n$ ",
"in which $I(x_k)$ is 1 if feature $k$ occurs in instance ${\\rm x}$ and 0 otherwise, $C_k = \\sum _{\\rm x} I(x_k)$ is the number of instances with a non-zero value of feature $k$ , and $p_\\theta (y|{\\rm x})$ takes a softmax form as follows: $\np_\\theta (y|{\\rm x}) = \\frac{1}{Z(\\rm x)}\\exp (\\sum _i \\theta _{yi}x_i).\n$ ",
"To solve the optimization problem, L-BFGS can be used for parameter estimation.",
"In the framework of GE, this term can be obtained by setting the constraint function $G({\\rm x}, y) = \\frac{1}{C_k} \\vec{I} (y)I(x_k)$ , where $\\vec{I}(y)$ is an indicator vector with 1 at the index corresponding to label $y$ and 0 elsewhere."
],
[
"GE-FL reduces the heavy load of instance annotation and performs well when we provide prior knowledge with no bias. In our experiments, we observe that comparable numbers of labeled features for each class have to be supplied. But as mentioned before, it is often the case that we are not able to provide enough knowledge for some of the classes. For the baseball-hockey classification task, as shown before, GE-FL will predict most of the instances as baseball. In this section, we will show three terms to make the model more robust.",
"Neutral features are features that are not informative indicator of any classes, for instance, word player to the baseball-hockey classification task. Such features are usually frequent words across all categories. When we set the preference distribution of the neutral features to be uniform distributed, these neutral features will prevent the model from biasing to the class that has a dominate number of labeled features.",
"Formally, given a set of neutral features $K^{^{\\prime }}$ , the uniform distribution is $\\hat{p}_u(y|x_k) = \\frac{1}{|C|}, k \\in K^{^{\\prime }}$ , where $|C|$ is the number of classes. The objective function with the new term becomes ",
"$$\\mathcal {O}_{NE} = \\mathcal {O} + \\sum _{k \\in K^{^{\\prime }}} KL(\\hat{p}_u(y|x_k) || p_\\theta (y | x_k)).$$ (Eq. 9) ",
"Note that we do not need manual annotation to provide neutral features. One simple way is to take the most common features as neutral features. Experimental results show that this strategy works successfully.",
"Another way to prevent the model from drifting from the desired direction is to constrain the predicted class distribution on unlabeled data. When lacking knowledge about the class distribution of the data, one feasible way is to take maximum entropy principle, as below: ",
"$$\\mathcal {O}_{ME} = \\mathcal {O} + \\lambda \\sum _{y} p(y) \\log p(y)$$ (Eq. 11) ",
"where $p(y)$ is the predicted class distribution, given by $\np(y) = \\frac{1}{|X|} \\sum _{\\rm x} p_\\theta (y | \\rm x).\n$ To control the influence of this term on the overall objective function, we can tune $\\lambda $ according to the difference in the number of labeled features of each class. In this paper, we simply set $\\lambda $ to be proportional to the total number of labeled features, say $\\lambda = \\beta |K|$ .",
"This maximum entropy term can be derived by setting the constraint function to $G({\\rm x}, y) = \\vec{I}(y)$ . Therefore, $E_{p_\\theta (y|{\\rm x})}[G({\\rm x}, y)]$ is just the model distribution $p_\\theta (y|{\\rm x})$ and its expectation with the empirical distribution $\\tilde{p}(\\rm x)$ is simply the average over input samples, namely $p(y)$ . When $S$ takes the maximum entropy form, we can derive the objective function as above.",
"Sometimes, we have already had much knowledge about the corpus, and can estimate the class distribution roughly without labeling instances. Therefore, we introduce the KL divergence between the predicted and reference class distributions into the objective function.",
"Given the preference class distribution $\\hat{p}(y)$ , we modify the objective function as follows: ",
"$$\\mathcal {O}_{KL} &= \\mathcal {O} + \\lambda KL(\\hat{p}(y) || p(y))$$ (Eq. 13) ",
"Similarly, we set $\\lambda = \\beta |K|$ .",
"This divergence term can be derived by setting the constraint function to $G({\\rm x}, y) = \\vec{I}(y)$ and setting the score function to $S(\\hat{p}, p) = \\sum _i \\hat{p}_i \\log \\frac{\\hat{p}_i}{p_i}$ , where $p$ and $\\hat{p}$ are distributions. Note that this regularization term involves the reference class distribution which will be discussed later."
],
[
"In this section, we first justify the approach when there exists unbalance in the number of labeled features or in class distribution. Then, to test the influence of $\\lambda $ , we conduct some experiments with the method which incorporates the KL divergence of class distribution. Last, we evaluate our approaches in 9 commonly used text classification datasets. We set $\\lambda = 5|K|$ by default in all experiments unless there is explicit declaration. The baseline we choose here is GE-FL BIBREF0 , a method based on generalization expectation criteria."
],
[
"We evaluate our methods on several commonly used datasets whose themes range from sentiment, web-page, science to medical and healthcare. We use bag-of-words feature and remove stopwords in the preprocess stage. Though we have labels of all documents, we do not use them during the learning process, instead, we use the label of features.",
"The movie dataset, in which the task is to classify the movie reviews as positive or negtive, is used for testing the proposed approaches with unbalanced labeled features, unbalanced datasets or different $\\lambda $ parameters. All unbalanced datasets are constructed based on the movie dataset by randomly removing documents of the positive class. For each experiment, we conduct 10-fold cross validation.",
"As described in BIBREF0 , there are two ways to obtain labeled features. The first way is to use information gain. We first calculate the mutual information of all features according to the labels of the documents and select the top 20 as labeled features for each class as a feature pool. Note that using information gain requires the document label, but this is only to simulate how we human provide prior knowledge to the model. The second way is to use LDA BIBREF9 to select features. We use the same selection process as BIBREF0 , where they first train a LDA on the dataset, and then select the most probable features of each topic (sorted by $P(w_i|t_j)$ , the probability of word $w_i$ given topic $t_j$ ).",
"Similar to BIBREF10 , BIBREF0 , we estimate the reference distribution of the labeled features using a heuristic strategy. If there are $|C|$ classes in total, and $n$ classes are associated with a feature $k$ , the probability that feature $k$ is related with any one of the $n$ classes is $\\frac{0.9}{n}$ and with any other class is $\\frac{0.1}{|C| - n}$ .",
"Neutral features are the most frequent words after removing stop words, and their reference distributions are uniformly distributed. We use the top 10 frequent words as neutral features in all experiments."
],
[
"In this section, we evaluate our approach when there is unbalanced knowledge on the categories to be classified. The labeled features are obtained through information gain. Two settings are chosen:",
"(a) We randomly select $t \\in [1, 20]$ features from the feature pool for one class, and only one feature for the other. The original balanced movie dataset is used (positive:negative=1:1).",
"(b) Similar to (a), but the dataset is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4).",
"As shown in Figure 1 , Maximum entropy principle shows improvement only on the balanced case. An obvious reason is that maximum entropy only favors uniform distribution.",
"Incorporating Neutral features performs similarly to maximum entropy since we assume that neutral words are uniformly distributed. Its accuracy decreases slowly when the number of labeled features becomes larger ( $t>4$ ) (Figure 1 (a)), suggesting that the model gradually biases to the class with more labeled features, just like GE-FL.",
"Incorporating the KL divergence of class distribution performs much better than GE-FL on both balanced and unbalanced datasets. This shows that it is effective to control the unbalance in labeled features and in the dataset."
],
[
"We also compare with the baseline when the labeled features are balanced. Similar to the experiment above, the labeled features are obtained by information gain. Two settings are experimented with:",
"(a) We randomly select $t \\in [1, 20]$ features from the feature pool for each class, and conduct comparisons on the original balanced movie dataset (positive:negtive=1:1).",
"(b) Similar to (a), but the class distribution is unbalanced, by randomly removing 75% positive documents (positive:negative=1:4).",
"Results are shown in Figure 2 . When the dataset is balanced (Figure 2 (a)), there is little difference between GE-FL and our methods. The reason is that the proposed regularization terms provide no additional knowledge to the model and there is no bias in the labeled features. On the unbalanced dataset (Figure 2 (b)), incorporating KL divergence is much better than GE-FL since we provide additional knowledge(the true class distribution), but maximum entropy and neutral features are much worse because forcing the model to approach the uniform distribution misleads it."
],
[
"Our methods are also evaluated on datasets with different unbalanced class distributions. We manually construct several movie datasets with class distributions of 1:2, 1:3, 1:4 by randomly removing 50%, 67%, 75% positive documents. The original balanced movie dataset is used as a control group. We test with both balanced and unbalanced labeled features. For the balanced case, we randomly select 10 features from the feature pool for each class, and for the unbalanced case, we select 10 features for one class, and 1 feature for the other. Results are shown in Figure 3 .",
"Figure 3 (a) shows that when the dataset and the labeled features are both balanced, there is little difference between our methods and GE-FL(also see Figure 2 (a)). But when the class distribution becomes more unbalanced, the difference becomes more remarkable. Performance of neutral features and maximum entropy decrease significantly but incorporating KL divergence increases remarkably. This suggests if we have more accurate knowledge about class distribution, KL divergence can guide the model to the right direction.",
"Figure 3 (b) shows that when the labeled features are unbalanced, our methods significantly outperforms GE-FL. Incorporating KL divergence is robust enough to control unbalance both in the dataset and in labeled features while the other three methods are not so competitive."
],
[
"We present the influence of $\\lambda $ on the method that incorporates KL divergence in this section. Since we simply set $\\lambda = \\beta |K|$ , we just tune $\\beta $ here. Note that when $\\beta = 0$ , the newly introduced regularization term is disappeared, and thus the model is actually GE-FL. Again, we test the method with different $\\lambda $ in two settings:",
"(a) We randomly select $t \\in [1, 20]$ features from the feature pool for one class, and only one feature for the other class. The original balanced movie dataset is used (positive:negative=1:1).",
"(b) Similar to (a), but the dataset is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4).",
"Results are shown in Figure 4 . As expected, $\\lambda $ reflects how strong the regularization is. The model tends to be closer to our preferences with the increasing of $\\lambda $ on both cases."
],
[
"We compare our methods with GE-FL on all the 9 datasets in this section. Instead of using features obtained by information gain, we use LDA to select labeled features. Unlike information gain, LDA does not employ any instance labels to find labeled features. In this setting, we can build classification models without any instance annotation, but just with labeled features.",
"Table 1 shows that our three methods significantly outperform GE-FL. Incorporating neutral features performs better than GE-FL on 7 of the 9 datasets, maximum entropy is better on 8 datasets, and KL divergence better on 7 datasets.",
"LDA selects out the most predictive features as labeled features without considering the balance among classes. GE-FL does not exert any control on such an issue, so the performance is severely suffered. Our methods introduce auxiliary regularization terms to control such a bias problem and thus promote the model significantly."
],
[
"There have been much work that incorporate prior knowledge into learning, and two related lines are surveyed here. One is to use prior knowledge to label unlabeled instances and then apply a standard learning algorithm. The other is to constrain the model directly with prior knowledge.",
"Liu et al.text manually labeled features which are highly predictive to unsupervised clustering assignments and use them to label unlabeled data. Chang et al.guiding proposed constraint driven learning. They first used constraints and the learned model to annotate unlabeled instances, and then updated the model with the newly labeled data. Daumé daume2008cross proposed a self training method in which several models are trained on the same dataset, and only unlabeled instances that satisfy the cross task knowledge constraints are used in the self training process.",
"MaCallum et al.gec proposed generalized expectation(GE) criteria which formalised the knowledge as constraint terms about the expectation of the model into the objective function.Graça et al.pr proposed posterior regularization(PR) framework which projects the model's posterior onto a set of distributions that satisfy the auxiliary constraints. Druck et al.ge-fl explored constraints of labeled features in the framework of GE by forcing the model's predicted feature distribution to approach the reference distribution. Andrzejewski et al.andrzejewski2011framework proposed a framework in which general domain knowledge can be easily incorporated into LDA. Altendorf et al.altendorf2012learning explored monotonicity constraints to improve the accuracy while learning from sparse data. Chen et al.chen2013leveraging tried to learn comprehensible topic models by leveraging multi-domain knowledge.",
"Mann and McCallum simple,generalized incorporated not only labeled features but also other knowledge like class distribution into the objective function of GE-FL. But they discussed only from the semi-supervised perspective and did not investigate into the robustness problem, unlike what we addressed in this paper.",
"There are also some active learning methods trying to use prior knowledge. Raghavan et al.feedback proposed to use feedback on instances and features interlacedly, and demonstrated that feedback on features boosts the model much. Druck et al.active proposed an active learning method which solicits labels on features rather than on instances and then used GE-FL to train the model."
],
[
"This paper investigates into the problem of how to leverage prior knowledge robustly in learning models. We propose three regularization terms on top of generalized expectation criteria. As demonstrated by the experimental results, the performance can be considerably improved when taking into account these factors. Comparative results show that our proposed methods is more effective and works more robustly against baselines. To the best of our knowledge, this is the first work to address the robustness problem of leveraging knowledge, and may inspire other research.",
"We then present more detailed discussions about the three regularization methods. Incorporating neutral features is the simplest way of regularization, which doesn't require any modification of GE-FL but just finding out some common features. But as Figure 1 (a) shows, only using neutral features are not strong enough to handle extremely unbalanced labeled features.",
"The maximum entropy regularization term shows the strong ability of controlling unbalance.",
"This method doesn't need any extra knowledge, and is thus suitable when we know nothing about the corpus. But this method assumes that the categories are uniformly distributed, which may not be the case in practice, and it will have a degraded performance if the assumption is violated (see Figure 1 (b), Figure 2 (b), Figure 3 (a)).",
"The KL divergence performs much better on unbalanced corpora than other methods. The reason is that KL divergence utilizes the reference class distribution and doesn't make any assumptions. The fact suggests that additional knowledge does benefit the model.",
"However, the KL divergence term requires providing the true class distribution. Sometimes, we may have the exact knowledge about the true distribution, but sometimes we may not. Fortunately, the model is insensitive to the true distribution and therefore a rough estimation of the true distribution is sufficient. In our experiments, when the true class distribution is 1:2, where the reference class distribution is set to 1:1.5/1:2/1:2.5, the accuracy is 0.755/0.756/0.760 respectively. This provides us the possibility to perform simple computing on the corpus to obtain the distribution in reality. Or, we can set the distribution roughly with domain expertise."
]
],
"section_name": [
"Introduction",
"Method",
"Generalized Expectation Criteria",
"Learning from Labeled Features",
"Regularization Terms",
"Experiments",
"Data Preparation",
"With Unbalanced Labeled Features",
"With Balanced Labeled Features",
"With Unbalanced Class Distributions",
"The Influence of λ\\lambda ",
"Using LDA Selected Features",
"Related Work",
"Conclusion and Discussions"
]
} | {
"answers": [
{
"annotation_id": [
"72c7d8f0cf398f03592d19718944d0d7eb6c948a",
"bd637c246a22f514c938bdfc3eb2509c4d68364e"
],
"answer": [
{
"evidence": [
"We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier. For example, words like amazing, exciting can be labeled features for class positive in sentiment classification."
],
"extractive_spans": [
"labeled features"
],
"free_form_answer": "",
"highlighted_evidence": [
"We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier. For example, words like amazing, exciting can be labeled features for class positive in sentiment classification.",
"As described in BIBREF0 , there are two ways to obtain labeled features. The first way is to use information gain. We first calculate the mutual information of all features according to the labels of the documents and select the top 20 as labeled features for each class as a feature pool. Note that using information gain requires the document label, but this is only to simulate how we human provide prior knowledge to the model. The second way is to use LDA BIBREF9 to select features. We use the same selection process as BIBREF0 , where they first train a LDA on the dataset, and then select the most probable features of each topic (sorted by $P(w_i|t_j)$ , the probability of word $w_i$ given topic $t_j$ ).",
"We evaluate our methods on several commonly used datasets whose themes range from sentiment, web-page, science to medical and healthcare. We use bag-of-words feature and remove stopwords in the preprocess stage. Though we have labels of all documents, we do not use them during the learning process, instead, we use the label of features."
],
"extractive_spans": [],
"free_form_answer": "labelled features, which are words whose presence strongly indicates a specific class or topic",
"highlighted_evidence": [
"We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier.",
"As described in BIBREF0 , there are two ways to obtain labeled features.",
"We use bag-of-words feature and remove stopwords in the preprocess stage. ",
"We first calculate the mutual information of all features according to the labels of the documents and select the top 20 as labeled features for each class as a feature pool.",
"The second way is to use LDA BIBREF9 to select features. We use the same selection process as BIBREF0 , where they first train a LDA on the dataset, and then select the most probable features of each topic (sorted by $P(w_i|t_j)$ , the probability of word $w_i$ given topic $t_j$ )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
},
{
"annotation_id": [
"ea5858495aad50966177d146f5e71733ec1def94",
"ec189863dfb5bfb83b9a2ff6f8d1e91a524931ce"
],
"answer": [
{
"evidence": [
"More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) the maximum entropy of class distribution regularization term; and (3) the KL divergence between reference and predicted class distribution. For the first manner, we simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third one, we assume we have some knowledge about the class distribution which will be detailed soon later."
],
"extractive_spans": [
"a regularization term associated with neutral features",
"the maximum entropy of class distribution regularization term",
"the KL divergence between reference and predicted class distribution"
],
"free_form_answer": "",
"highlighted_evidence": [
"More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) the maximum entropy of class distribution regularization term; and (3) the KL divergence between reference and predicted class distribution."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) the maximum entropy of class distribution regularization term; and (3) the KL divergence between reference and predicted class distribution. For the first manner, we simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third one, we assume we have some knowledge about the class distribution which will be detailed soon later."
],
"extractive_spans": [
"a regularization term associated with neutral features",
" the maximum entropy of class distribution",
"KL divergence between reference and predicted class distribution"
],
"free_form_answer": "",
"highlighted_evidence": [
"More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) the maximum entropy of class distribution regularization term; and (3) the KL divergence between reference and predicted class distribution."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
},
{
"annotation_id": [
"2c0757cb906ecefa36648ada497fcc9bda53645f"
],
"answer": [
{
"evidence": [
"In this section, we first justify the approach when there exists unbalance in the number of labeled features or in class distribution. Then, to test the influence of $\\lambda $ , we conduct some experiments with the method which incorporates the KL divergence of class distribution. Last, we evaluate our approaches in 9 commonly used text classification datasets. We set $\\lambda = 5|K|$ by default in all experiments unless there is explicit declaration. The baseline we choose here is GE-FL BIBREF0 , a method based on generalization expectation criteria.",
"We evaluate our methods on several commonly used datasets whose themes range from sentiment, web-page, science to medical and healthcare. We use bag-of-words feature and remove stopwords in the preprocess stage. Though we have labels of all documents, we do not use them during the learning process, instead, we use the label of features."
],
"extractive_spans": [],
"free_form_answer": "text classification for themes including sentiment, web-page, science, medical and healthcare",
"highlighted_evidence": [
" Last, we evaluate our approaches in 9 commonly used text classification datasets.",
"We evaluate our methods on several commonly used datasets whose themes range from sentiment, web-page, science to medical and healthcare."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
},
{
"annotation_id": [
"052bc9840f6733421ea17a3b28753bbec7a871d2",
"95ca58e8d9c9f3445447fc3bcfbd7d902489ddc8"
],
"answer": [
{
"evidence": [
"GE-FL reduces the heavy load of instance annotation and performs well when we provide prior knowledge with no bias. In our experiments, we observe that comparable numbers of labeled features for each class have to be supplied. But as mentioned before, it is often the case that we are not able to provide enough knowledge for some of the classes. For the baseball-hockey classification task, as shown before, GE-FL will predict most of the instances as baseball. In this section, we will show three terms to make the model more robust.",
"(a) We randomly select $t \\in [1, 20]$ features from the feature pool for one class, and only one feature for the other. The original balanced movie dataset is used (positive:negative=1:1).",
"Our methods are also evaluated on datasets with different unbalanced class distributions. We manually construct several movie datasets with class distributions of 1:2, 1:3, 1:4 by randomly removing 50%, 67%, 75% positive documents. The original balanced movie dataset is used as a control group. We test with both balanced and unbalanced labeled features. For the balanced case, we randomly select 10 features from the feature pool for each class, and for the unbalanced case, we select 10 features for one class, and 1 feature for the other. Results are shown in Figure 3 .",
"Figure 3 (b) shows that when the labeled features are unbalanced, our methods significantly outperforms GE-FL. Incorporating KL divergence is robust enough to control unbalance both in the dataset and in labeled features while the other three methods are not so competitive."
],
"extractive_spans": [],
"free_form_answer": "ability to accurately classify texts even when the amount of prior knowledge for different classes is unbalanced, and when the class distribution of the dataset is unbalanced",
"highlighted_evidence": [
"GE-FL reduces the heavy load of instance annotation and performs well when we provide prior knowledge with no bias. In our experiments, we observe that comparable numbers of labeled features for each class have to be supplied.",
"We randomly select $t \\in [1, 20]$ features from the feature pool for one class, and only one feature for the other.",
"Our methods are also evaluated on datasets with different unbalanced class distributions. We manually construct several movie datasets with class distributions of 1:2, 1:3, 1:4 by randomly removing 50%, 67%, 75% positive documents.",
"Incorporating KL divergence is robust enough to control unbalance both in the dataset and in labeled features while the other three methods are not so competitive."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"However, a crucial problem, which has rarely been addressed, is the bias in the prior knowledge that we supply to the learning model. Would the model be robust or sensitive to the prior knowledge? Or, which kind of knowledge is appropriate for the task? Let's see an example: we may be a baseball fan but unfamiliar with hockey so that we can provide a few number of feature words of baseball, but much less of hockey for a baseball-hockey classification task. Such prior knowledge may mislead the model with heavy bias to baseball. If the model cannot handle this situation appropriately, the performance may be undesirable.",
"In this paper, we investigate into the problem in the framework of Generalized Expectation Criteria BIBREF7 . The study aims to reveal the factors of reducing the sensibility of the prior knowledge and therefore to make the model more robust and practical. To this end, we introduce auxiliary regularization terms in which our prior knowledge is formalized as distribution over output variables. Recall the example just mentioned, though we do not have enough knowledge to provide features for class hockey, it is easy for us to provide some neutral words, namely words that are not strong indicators of any class, like player here. As one of the factors revealed in this paper, supplying neutral feature words can boost the performance remarkably, making the model more robust."
],
"extractive_spans": [],
"free_form_answer": "Low sensitivity to bias in prior knowledge",
"highlighted_evidence": [
"However, a crucial problem, which has rarely been addressed, is the bias in the prior knowledge that we supply to the learning model. Would the model be robust or sensitive to the prior knowledge?",
"The study aims to reveal the factors of reducing the sensibility of the prior knowledge and therefore to make the model more robust and practical."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What background knowledge do they leverage?",
"What are the three regularization terms?",
"What NLP tasks do they consider?",
"How do they define robustness of a model?"
],
"question_id": [
"50be4a737dc0951b35d139f51075011095d77f2a",
"6becff2967fe7c5256fe0b00231765be5b9db9f1",
"76121e359dfe3f16c2a352bd35f28005f2a40da3",
"02428a8fec9788f6dc3a86b5d5f3aa679935678d"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [],
"file": []
} | [
"What background knowledge do they leverage?",
"What NLP tasks do they consider?",
"How do they define robustness of a model?"
] | [
[
"1503.00841-Data Preparation-0",
"1503.00841-Data Preparation-2",
"1503.00841-Method-0"
],
[
"1503.00841-Data Preparation-0",
"1503.00841-Experiments-0"
],
[
"1503.00841-With Unbalanced Labeled Features-1",
"1503.00841-Introduction-2",
"1503.00841-Introduction-3",
"1503.00841-With Unbalanced Class Distributions-2",
"1503.00841-With Unbalanced Class Distributions-0",
"1503.00841-Regularization Terms-0"
]
] | [
"labelled features, which are words whose presence strongly indicates a specific class or topic",
"text classification for themes including sentiment, web-page, science, medical and healthcare",
"Low sensitivity to bias in prior knowledge"
] | 140 |
1804.11346 | A Portuguese Native Language Identification Dataset | In this paper we present NLI-PT, the first Portuguese dataset compiled for Native Language Identification (NLI), the task of identifying an author's first language based on their second language writing. The dataset includes 1,868 student essays written by learners of European Portuguese, native speakers of the following L1s: Chinese, English, Spanish, German, Russian, French, Japanese, Italian, Dutch, Tetum, Arabic, Polish, Korean, Romanian, and Swedish. NLI-PT includes the original student text and four different types of annotation: POS, fine-grained POS, constituency parses, and dependency parses. NLI-PT can be used not only in NLI but also in research on several topics in the field of Second Language Acquisition and educational NLP. We discuss possible applications of this dataset and present the results obtained for the first lexical baseline system for Portuguese NLI. | {
"paragraphs": [
[
"Several learner corpora have been compiled for English, such as the International Corpus of Learner English BIBREF0 . The importance of such resources has been increasingly recognized across a variety of research areas, from Second Language Acquisition to Natural Language Processing. Recently, we have seen substantial growth in this area and new corpora for languages other than English have appeared. For Romance languages, there are a several corpora and resources for French, Spanish BIBREF1 , and Italian BIBREF2 .",
"Portuguese has also received attention in the compilation of learner corpora. There are two corpora compiled at the School of Arts and Humanities of the University of Lisbon: the corpus Recolha de dados de Aprendizagem do Português Língua Estrangeira (hereafter, Leiria corpus), with 470 texts and 70,500 tokens, and the Learner Corpus of Portuguese as Second/Foreign Language, COPLE2 BIBREF3 , with 1,058 texts and 201,921 tokens. The Corpus de Produções Escritas de Aprendentes de PL2, PEAPL2 compiled at the University of Coimbra, contains 516 texts and 119,381 tokens. Finally, the Corpus de Aquisição de L2, CAL2, compiled at the New University of Lisbon, contains 1,380 texts and 281,301 words, and it includes texts produced by adults and children, as well as a spoken subset.",
"The aforementioned Portuguese learner corpora contain very useful data for research, particularly for Native Language Identification (NLI), a task that has received much attention in recent years. NLI is the task of determining the native language (L1) of an author based on their second language (L2) linguistic productions BIBREF4 . NLI works by identifying language use patterns that are common to groups of speakers of the same native language. This process is underpinned by the presupposition that an author’s L1 disposes them towards certain language production patterns in their L2, as influenced by their mother tongue. A major motivation for NLI is studying second language acquisition. NLI models can enable analysis of inter-L1 linguistic differences, allowing us to study the language learning process and develop L1-specific pedagogical methods and materials.",
"However, there are limitations to using existing Portuguese data for NLI. An important issue is that the different corpora each contain data collected from different L1 backgrounds in varying amounts; they would need to be combined to have sufficient data for an NLI study. Another challenge concerns the annotations as only two of the corpora (PEAPL2 and COPLE2) are linguistically annotated, and this is limited to POS tags. The different data formats used by each corpus presents yet another challenge to their usage.",
"In this paper we present NLI-PT, a dataset collected for Portuguese NLI. The dataset is made freely available for research purposes. With the goal of unifying learner data collected from various sources, listed in Section \"Collection methodology\" , we applied a methodology which has been previously used for the compilation of language variety corpora BIBREF5 . The data was converted to a unified data format and uniformly annotated at different linguistic levels as described in Section \"Preprocessing and annotation of texts\" . To the best of our knowledge, NLI-PT is the only Portuguese dataset developed specifically for NLI, this will open avenues for research in this area."
],
[
"NLI has attracted a lot of attention in recent years. Due to the availability of suitable data, as discussed earlier, this attention has been particularly focused on English. The most notable examples are the two editions of the NLI shared task organized in 2013 BIBREF6 and 2017 BIBREF7 .",
"Even though most NLI research has been carried out on English data, an important research trend in recent years has been the application of NLI methods to other languages, as discussed in multilingual-nli. Recent NLI studies on languages other than English include Arabic BIBREF8 and Chinese BIBREF9 , BIBREF10 . To the best of our knowledge, no study has been published on Portuguese and the NLI-PT dataset opens new possibilities of research for Portuguese. In Section \"A Baseline for Portuguese NLI\" we present the first simple baseline results for this task.",
"Finally, as NLI-PT can be used in other applications besides NLI, it is important to point out that a number of studies have been published on educational NLP applications for Portuguese and on the compilation of learner language resources for Portuguese. Examples of such studies include grammatical error correction BIBREF11 , automated essay scoring BIBREF12 , academic word lists BIBREF13 , and the learner corpora presented in the previous section."
],
[
"The data was collected from three different learner corpora of Portuguese: (i) COPLE2; (ii) Leiria corpus, and (iii) PEAPL2 as presented in Table 1 .",
"The three corpora contain written productions from learners of Portuguese with different proficiency levels and native languages (L1s). In the dataset we included all the data in COPLE2 and sections of PEAPL2 and Leiria corpus.",
"The main variable we used for text selection was the presence of specific L1s. Since the three corpora consider different L1s, we decided to use the L1s present in the largest corpus, COPLE2, as the reference. Therefore, we included in the dataset texts corresponding to the following 15 L1s: Chinese, English, Spanish, German, Russian, French, Japanese, Italian, Dutch, Tetum, Arabic, Polish, Korean, Romanian, and Swedish. It was the case that some of the L1s present in COPLE2 were not documented in the other corpora. The number of texts from each L1 is presented in Table 2 .",
"Concerning the corpus design, there is some variability among the sources we used. Leiria corpus and PEAPL2 followed a similar approach for data collection and show a close design. They consider a close list of topics, called “stimulus”, which belong to three general areas: (i) the individual; (ii) the society; (iii) the environment. Those topics are presented to the students in order to produce a written text. As a whole, texts from PEAPL2 and Leiria represent 36 different stimuli or topics in the dataset. In COPLE2 corpus the written texts correspond to written exercises done during Portuguese lessons, or to official Portuguese proficiency tests. For this reason, the topics considered in COPLE2 corpus are different from the topics in Leiria and PEAPL2. The number of topics is also larger in COPLE2 corpus: 149 different topics. There is some overlap between the different topics considered in COPLE2, that is, some topics deal with the same subject. This overlap allowed us to reorganize COPLE2 topics in our dataset, reducing them to 112.",
"Due to the different distribution of topics in the source corpora, the 148 topics in the dataset are not represented uniformly. Three topics account for a 48.7% of the total texts and, on the other hand, a 72% of the topics are represented by 1-10 texts (Figure 1 ). This variability affects also text length. The longest text has 787 tokens and the shortest has only 16 tokens. Most texts, however, range roughly from 150 to 250 tokens. To better understand the distribution of texts in terms of word length we plot a histogram of all texts with their word length in bins of 10 (1-10 tokens, 11-20 tokens, 21-30 tokens and so on) (Figure 2 ).",
"The three corpora use the proficiency levels defined in the Common European Framework of Reference for Languages (CEFR), but they show differences in the number of levels they consider. There are five proficiency levels in COPLE2 and PEAPL2: A1, A2, B1, B2, and C1. But there are 3 levels in Leiria corpus: A, B, and C. The number of texts included from each proficiency level is presented in Table 4 ."
],
[
"As demonstrated earlier, these learner corpora use different formats. COPLE2 is mainly codified in XML, although it gives the possibility of getting the student version of the essay in TXT format. PEAPL2 and Leiria corpus are compiled in TXT format. In both corpora, the TXT files contain the student version with special annotations from the transcription. For the NLI experiments we were interested in a clean txt version of the students' text, together with versions annotated at different linguistics levels. Therefore, as a first step, we removed all the annotations corresponding to the transcription process in PEAPL2 and Leiria files. As a second step, we proceeded to the linguistic annotation of the texts using different NLP tools.",
"We annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. For POS we added a simple POS, that is, only type of word, and a fine-grained POS, which is the type of word plus its morphological features. We used the LX Parser BIBREF14 , for the simple POS and the Portuguese morphological module of Freeling BIBREF15 , for detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. For constituency parsing, we used the LX Parser, and for dependency, the DepPattern toolkit BIBREF16 ."
],
[
"NLI-PT was developed primarily for NLI, but it can be used for other research purposes ranging from second language acquisition to educational NLP applications. Here are a few examples of applications in which the dataset can be used:"
],
[
"To demonstrate the usefulness of the dataset we present the first lexical baseline for Portuguese NLI using a sub-set of NLI-PT. To the best of our knowledge, no study has been published on Portuguese NLI and our work fills this gap.",
"In this experiment we included the five L1s in NLI-PT which contain the largest number of texts in this sub-set and run a simple linear SVM BIBREF21 classifier using a bag of words model to identify the L1 of each text. The languages included in this experiment were Chinese (355 texts), English (236 texts), German (214 texts), Italian (216 texts), and Spanish (271 texts).",
"We evaluated the model using stratified 10-fold cross-validation, achieving 70% accuracy. An important limitation of this experiment is that it does not account for topic bias, an important issue in NLI BIBREF22 . This is due to the fact that NLI-PT is not balanced by topic and the model could be learning topic associations instead. In future work we would like to carry out using syntactic features such as function words, syntactic relations and POS annotation."
],
[
"This paper presented NLI-PT, the first Portuguese dataset compiled for NLI. NLI-PT contains 1,868 texts written by speakers of 15 L1s amounting to over 380,000 tokens.",
"As discussed in Section \"Applications\" , NLI-PT opens several avenues for future research. It can be used for different research purposes beyond NLI such as grammatical error correction and CALL. An experiment with the texts written by the speakers of five L1s: Chinese, English, German, Italian, and Spanish using a bag of words model achieved 70% accuracy. We are currently experimenting with different features taking advantage of the annotation available in NLI-PT thus reducing topic bias in classification.",
"In future work we would like to include more texts in the dataset following the same methodology and annotation."
],
[
"We want to thank the research teams that have made available the data we used in this work: Centro de Estudos de Linguística Geral e Aplicada at Universidade de Coimbra (specially Cristina Martins) and Centro de Linguística da Universidade de Lisboa (particularly Amália Mendes).",
"This work was partially supported by Fundação para a Ciência e a Tecnologia (postdoctoral research grant SFRH/BPD/109914/2015)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Collection methodology",
"Preprocessing and annotation of texts",
"Applications",
"A Baseline for Portuguese NLI",
"Conclusion and Future Work",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"0596e64af3a847e0c948a28419268e69c1b43535",
"e30f7ebcf2896a49b716ab1309a0c8b9b8269d1a"
],
"answer": [
{
"evidence": [
"We annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. For POS we added a simple POS, that is, only type of word, and a fine-grained POS, which is the type of word plus its morphological features. We used the LX Parser BIBREF14 , for the simple POS and the Portuguese morphological module of Freeling BIBREF15 , for detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. For constituency parsing, we used the LX Parser, and for dependency, the DepPattern toolkit BIBREF16 ."
],
"extractive_spans": [],
"free_form_answer": "Automatic",
"highlighted_evidence": [
" We performed the annotation with freely available tools for the Portuguese language."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. For POS we added a simple POS, that is, only type of word, and a fine-grained POS, which is the type of word plus its morphological features. We used the LX Parser BIBREF14 , for the simple POS and the Portuguese morphological module of Freeling BIBREF15 , for detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. For constituency parsing, we used the LX Parser, and for dependency, the DepPattern toolkit BIBREF16 ."
],
"extractive_spans": [
"We performed the annotation with freely available tools for the Portuguese language."
],
"free_form_answer": "",
"highlighted_evidence": [
"We annotated the dataset at two levels: Part of Speech (POS) and syntax. We performed the annotation with freely available tools for the Portuguese language. For POS we added a simple POS, that is, only type of word, and a fine-grained POS, which is the type of word plus its morphological features. We used the LX Parser BIBREF14 , for the simple POS and the Portuguese morphological module of Freeling BIBREF15 , for detailed POS. Concerning syntactic annotations, we included constituency and dependency annotations. For constituency parsing, we used the LX Parser, and for dependency, the DepPattern toolkit BIBREF16 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"905966bd0cd6dd807e2c7c9cbf486d9c4a227609"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"9f767766a3ee52146caa157cf75b00d89ba8413f",
"f1b643b4b670c254d08965c0b12d672cf2f25afd"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Distribution by L1s and source corpora."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Distribution by L1s and source corpora."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Distribution by L1s and source corpora."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Distribution by L1s and source corpora."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"6ed2d4f1cff3802965172fe140b15285dc07f78c",
"dee5ed78bbe5cecbc97f4623067a7a56c608fe37"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 2: Histogram of document lengths, as measured by the number of tokens. The mean value is 204 with standard deviation of 103."
],
"extractive_spans": [],
"free_form_answer": "204 tokens",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 2: Histogram of document lengths, as measured by the number of tokens. The mean value is 204 with standard deviation of 103."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Due to the different distribution of topics in the source corpora, the 148 topics in the dataset are not represented uniformly. Three topics account for a 48.7% of the total texts and, on the other hand, a 72% of the topics are represented by 1-10 texts (Figure 1 ). This variability affects also text length. The longest text has 787 tokens and the shortest has only 16 tokens. Most texts, however, range roughly from 150 to 250 tokens. To better understand the distribution of texts in terms of word length we plot a histogram of all texts with their word length in bins of 10 (1-10 tokens, 11-20 tokens, 21-30 tokens and so on) (Figure 2 )."
],
"extractive_spans": [
"Most texts, however, range roughly from 150 to 250 tokens."
],
"free_form_answer": "",
"highlighted_evidence": [
"The longest text has 787 tokens and the shortest has only 16 tokens. Most texts, however, range roughly from 150 to 250 tokens."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Are the annotations automatic or manually created?",
"Do the errors of the model reflect linguistic similarity between different L1s?",
"Is the dataset balanced between speakers of different L1s?",
"How long are the essays on average?"
],
"question_id": [
"7793805982354947ea9fc742411bec314a6998f6",
"007b13f05d234d37966d1aa7d85b5fd78564ff45",
"2ceced87af4c8fdebf2dc959aa700a5c95bd518f",
"72ed5fed07ace5e3ffe9de6c313625705bc8f0c7"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Distribution of the dataset: Number of texts, tokens, types, and type/token ratio (TTER) per source corpus.",
"Figure 1: Topic distribution by number of texts. Each bar represents one of the 148 topics.",
"Table 2: Distribution by L1s and source corpora.",
"Table 3: Number of different topics by source.",
"Figure 2: Histogram of document lengths, as measured by the number of tokens. The mean value is 204 with standard deviation of 103.",
"Table 4: Distribution by proficiency levels and by source corpus."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Figure2-1.png",
"4-Table4-1.png"
]
} | [
"Are the annotations automatic or manually created?",
"How long are the essays on average?"
] | [
[
"1804.11346-Preprocessing and annotation of texts-1"
],
[
"1804.11346-Collection methodology-4",
"1804.11346-4-Figure2-1.png"
]
] | [
"Automatic",
"204 tokens"
] | 141 |
1611.08661 | Knowledge Graph Representation with Jointly Structural and Textual Encoding | The objective of knowledge graph embedding is to encode both entities and relations of knowledge graphs into continuous low-dimensional vector spaces. Previously, most works focused on symbolic representation of knowledge graph with structure information, which can not handle new entities or entities with few facts well. In this paper, we propose a novel deep architecture to utilize both structural and textual information of entities. Specifically, we introduce three neural models to encode the valuable information from text description of entity, among which an attentive model can select related information as needed. Then, a gating mechanism is applied to integrate representations of structure and text into a unified architecture. Experiments show that our models outperform baseline by margin on link prediction and triplet classification tasks. Source codes of this paper will be available on Github. | {
"paragraphs": [
[
"Knowledge graphs have been proved to benefit many artificial intelligence applications, such as relation extraction, question answering and so on. A knowledge graph consists of multi-relational data, having entities as nodes and relations as edges. An instance of fact is represented as a triplet (Head Entity, Relation, Tail Entity), where the Relation indicates a relationship between these two entities. In the past decades, great progress has been made in building large scale knowledge graphs, such as WordNet BIBREF0 , Freebase BIBREF1 . However, most of them have been built either collaboratively or semi-automatically and as a result, they often suffer from incompleteness and sparseness.",
"The knowledge graph completion is to predict relations between entities based on existing triplets in a knowledge graph. Recently, a new powerful paradigm has been proposed to encode every element (entity or relation) of a knowledge graph into a low-dimensional vector space BIBREF2 , BIBREF3 . The representations of entities and relations are obtained by minimizing a global loss function involving all entities and relations. Therefore, we can do reasoning over knowledge graphs through algebraic computations.",
"Although existing methods have good capability to learn knowledge graph embeddings, it remains challenging for entities with few or no facts BIBREF4 . To solve the issue of KB sparsity, many methods have been proposed to learn knowledge graph embeddings by utilizing related text information BIBREF5 , BIBREF6 , BIBREF7 . These methods learn joint embedding of entities, relations, and words (or phrases, sentences) into the same vector space. However, there are still three problems to be solved. (1) The combination methods of the structural and textual representations are not well studied in these methods, in which two kinds of representations are merely aligned on word level or separate loss function. (2) The text description may represent an entity from various aspects, and various relations only focus on fractional aspects of the description. A good encoder should select the information from text in accordance with certain contexts of relations. Figure 1 illustrates the fact that not all information provided in its description are useful to predict the linked entities given a specific relation. (3) Intuitively, entities with many facts depend more on well-trained structured representation while those with few or no facts might be largely determined by text descriptions. A good representation should learn the most valuable information by balancing both sides.",
"In this paper, we propose a new deep architecture to learn the knowledge representation by utilizing the existing text descriptions of entities. Specifically, we learn a joint representation of each entity from two information sources: one is structure information, and another is its text description. The joint representation is the combination of the structure and text representations with a gating mechanism. The gate decides how much information from the structure or text representation will carry over to the final joint representation. In addition, we also introduce an attention mechanism to select the most related information from text description under different contexts. Experimental results on link prediction and triplet classification show that our joint models can handle the sparsity problem well and outperform the baseline method on all metrics with a large margin.",
"Our contributions in this paper are summarized as follows."
],
[
"In this section, we briefly introduce the background knowledge about the knowledge graph embedding.",
"Knowledge graph embedding aims to model multi-relational data (entities and relations) into a continuous low-dimensional vector space. Given a pair of entities $(h,t)$ and their relation $r$ , we can represent them with a triple $(h,r,t)$ . A score function $f(h,r, t)$ is defined to model the correctness of the triple $(h,r,t)$ , thus to distinguish whether two entities $h$ and $t$ are in a certain relationship $r$ . $f(h,r, t)$ should be larger for a golden triplet $(h, r, t)$ that corresponds to a true fact in real world, otherwise $r$0 should be lower for an negative triplet.",
"The difference among the existing methods varies between linear BIBREF2 , BIBREF8 and nonlinear BIBREF3 score functions in the low-dimensional vector space.",
"Among these methods, TransE BIBREF2 is a simple and effective approach, which learns the vector embeddings for both entities and relationships. Its basic idea is that the relationship between two entities is supposed to correspond to a translation between the embeddings of entities, that is, $\\textbf {h}+ \\mathbf {r}\\approx \\mathbf {t}$ when $(h,r,t)$ holds.",
"TransE's score function is defined as: ",
"$$f(h,r,t)) &= -\\Vert \\textbf {h}+\\mathbf {r}-\\mathbf {t}\\Vert _{2}^2$$ (Eq. 5) ",
" where $\\textbf {h},\\mathbf {t},\\mathbf {r}\\in \\mathbb {R}^d$ are embeddings of $h,t,r$ respectively, and satisfy $\\Vert \\textbf {h}\\Vert ^2_2=\\Vert \\mathbf {t}\\Vert ^2_2=1$ . The $\\textbf {h}, \\mathbf {r}, \\mathbf {t}$ are indexed by a lookup table respectively."
],
[
"Given an entity in most of the existing knowledge bases, there is always an available corresponding text description with valuable semantic information for this entity, which can provide beneficial supplement for entity representation.",
"To encode the representation of a entity from its text description, we need to encode the variable-length sentence to a fixed-length vector. There are several kinds of neural models used in sentence modeling. These models generally consist of a projection layer that maps words, sub-word units or n-grams to vector representations (often trained beforehand with unsupervised methods), and then combine them with the different architectures of neural networks, such as neural bag-of-words (NBOW), recurrent neural network (RNN) BIBREF9 , BIBREF10 , BIBREF11 and convolutional neural network (CNN) BIBREF12 , BIBREF13 .",
"In this paper, we use three encoders (NBOW, LSTM and attentive LSTM) to model the text descriptions."
],
[
"A simple and intuitive method is the neural bag-of-words (NBOW) model, in which the representation of text can be generated by summing up its constituent word representations.",
"We denote the text description as word sequence $x_{1:n} = x_1,\\cdots ,x_n$ , where $x_i$ is the word at position $i$ . The NBOW encoder is ",
"$$\\mathrm {enc_1}(x_{1:n}) = \\sum _{i=1}^{n} \\mathbf {x}_i,$$ (Eq. 7) ",
" where $\\mathbf {x}_i \\in \\mathbb {R}^d$ is the word embedding of $x_i$ ."
],
[
"To address some of the modelling issues with NBOW, we consider using a bidirectional long short-term memory network (LSTM) BIBREF14 , BIBREF15 to model the text description.",
"LSTM was proposed by BIBREF16 to specifically address this issue of learning long-term dependencies BIBREF17 , BIBREF18 , BIBREF16 in RNN. The LSTM maintains a separate memory cell inside it that updates and exposes its content only when deemed necessary.",
"Bidirectional LSTM (BLSTM) can be regarded as two separate LSTMs with different directions. One LSTM models the text description from left to right, and another LSTM models text description from right to left respectively. We define the outputs of two LSTM at time step $i$ are $\\overrightarrow{\\mathbf {z}}_i$ and $\\overleftarrow{\\mathbf {z}}_i$ respectively.",
"The combined output of BLSTM at position $i$ is ${\\mathbf {z}_i} = \\overrightarrow{\\mathbf {z}}_i \\oplus \\overleftarrow{\\mathbf {z}}_i$ , where $\\oplus $ denotes the concatenation operation.",
"The LSTM encoder combines all the outputs $\\mathbf {z}_i \\in \\mathbb {R}^d$ of BLSTM at different position. ",
"$$\\mathrm {enc_2}(x_{1:n}) = \\sum _{i=1}^{n} {\\mathbf {z}_i}.$$ (Eq. 9) "
],
[
"While the LSTM encoder has richer capacity than NBOW, it produces the same representation for the entire text description regardless of its contexts. However, the text description may present an entity from various aspects, and various relations only focus on fractional aspects of the description. This phenomenon also occurs in structure embedding for an entity BIBREF8 , BIBREF19 .",
"Given a relation for an entity, not all of words/phrases in its text description are useful to model a specific fact. Some of them may be important for the given relation, but may be useless for other relations. Therefore, we introduce an attention mechanism BIBREF20 to utilize an attention-based encoder that constructs contextual text encodings according to different relations.",
"For each position $i$ of the text description, the attention for a given relation $r$ is defined as $\\alpha _i(r)$ , which is ",
"$$e_i(r) &= \\mathbf {v}_a^T \\tanh (\\mathbf {W}_a {\\mathbf {z}}_i + \\mathbf {U}_a \\mathbf {r}), \\\\\n\\alpha _i(r)&=\\operatorname{\\mathbf {softmax}}(e_i(r))\\nonumber \\\\\n&=\\frac{\\exp (e_i(r))}{\\sum ^{n}_{j=1} \\exp (e_j(r))},$$ (Eq. 12) ",
" where $\\mathbf {r}\\in \\mathbb {R}^d$ is the relation embedding; ${\\mathbf {z}}_i \\in \\mathbb {R}^d$ is the output of BLSTM at position $i$ ; $\\mathbf {W}_a,\\mathbf {U}_a \\in \\mathbb {R}^{d\\times d}$ are parameters matrices; $\\mathbf {v}_a \\in \\mathbb {R}^{d}$ is a parameter vector.",
"The attention $\\alpha _i(r)$ is interpreted as the degree to which the network attends to partial representation $\\mathbf {z}_{i}$ for given relation $r$ .",
"The contextual encoding of text description can be formed by a weighted sum of the encoding $\\mathbf {z}_{i}$ with attention. ",
"$$\\mathbf {enc_3}(x_{1:n};r) &= \\sum _{i=1}^{n} \\alpha _i(r) * \\mathbf {z}_i.$$ (Eq. 13) "
],
[
"Since both the structure and text description provide valuable information for an entity , we wish to integrate all these information into a joint representation.",
"We propose a united model to learn a joint representation of both structure and text information. The whole model can be end-to-end trained.",
"For an entity $e$ , we denote $\\mathbf {e}_s$ to be its embedding of structure information, $\\mathbf {e}_d$ to be encoding of its text descriptions. The main concern is how to combine $\\mathbf {e}_s$ and $\\mathbf {e}_d$ .",
"To integrate two kinds of representations of entities, we use gating mechanism to decide how much the joint representation depends on structure or text.",
"The joint representation $\\mathbf {e}$ is a linear interpolation between the $\\mathbf {e}_s$ and $\\mathbf {e}_d$ . ",
"$$\\mathbf {e}= \\textbf {g}_e \\odot \\mathbf {e}_s + (1-\\textbf {g}_e)\\odot \\mathbf {e}_d,$$ (Eq. 14) ",
" where $\\textbf {g}_e$ is a gate to balance two sources information and its elements are in $[0,1]$ , and $\\odot $ is an element-wise multiplication. Intuitively, when the gate is close to 0, the joint representation is forced to ignore the structure information and is the text representation only."
],
[
"We use the contrastive max-margin criterion BIBREF2 , BIBREF3 to train our model. Intuitively, the max-margin criterion provides an alternative to probabilistic, likelihood-based estimation methods by concentrating directly on the robustness of the decision boundary of a model BIBREF23 . The main idea is that each triplet $(h,r,t)$ coming from the training corpus should receives a higher score than a triplet in which one of the elements is replaced with a random elements.",
"We assume that there are $n_t$ triplets in training set and denote the $i$ th triplet by $(h_i, r_i, t_i),(i = 1, 2, \\cdots ,n_t)$ . Each triplet has a label $y_i$ to indicate the triplet is positive ( $y_i = 1$ ) or negative ( $y_i = 0$ ).",
"Then the golden and negative triplets are denoted by $\\mathcal {D} = \\lbrace (h_j, r_j, t_j) | y_j = 1\\rbrace $ and $\\mathcal {\\hat{D}} = \\lbrace (h_j, r_j, t_j) | y_j = 0\\rbrace $ , respectively. The positive example are the triplets from training dataset, and the negative examples are generated as follows: $ \\mathcal {\\hat{D}} = \\lbrace (h_l, r_k, t_k) | h_l \\ne h_k \\wedge y_k = 1\\rbrace \\cup \\lbrace (h_k, r_k, t_l) | t_l \\ne t_k \\wedge y_k = 1\\rbrace \\cup \\lbrace (h_k, r_l, t_k) | r_l \\ne r_k \\wedge y_k = 1\\rbrace $ . The sampling strategy is Bernoulli distribution described in BIBREF8 . Let the set of all parameters be $\\Theta $ , we minimize the following objective: ",
"$$J(\\Theta )=\\sum _{(h,r,t) \\in \\mathcal {D}}\\sum _{( \\hat{h},\\hat{r},\\hat{t}) \\in \\mathcal {\\hat{D}}} \\max \\left(0,\\gamma - \\right. \\nonumber \\\\\nf( h,r,t)+f(\\hat{h},\\hat{r},\\hat{t})\\left.\\right)+ \\eta \\Vert \\Theta \\Vert _2^2,$$ (Eq. 22) ",
"where $\\gamma > 0$ is a margin between golden triplets and negative triplets., $f(h, r, t)$ is the score function. We use the standard $L_2$ regularization of all the parameters, weighted by the hyperparameter $\\eta $ ."
],
[
"In this section, we study the empirical performance of our proposed models on two benchmark tasks: triplet classification and link prediction."
],
[
"We use two popular knowledge bases: WordNet BIBREF0 and Freebase BIBREF1 in this paper. Specifically, we use WN18 (a subset of WordNet) BIBREF24 and FB15K (a subset of Freebase) BIBREF2 since their text descriptions are easily publicly available. Table 1 lists statistics of the two datasets."
],
[
"Link prediction is a subtask of knowledge graph completion to complete a triplet $(h, r, t)$ with $h$ or $t$ missing, i.e., predict $t$ given $(h, r)$ or predict $h$ given $(r, t)$ . Rather than requiring one best answer, this task emphasizes more on ranking a set of candidate entities from the knowledge graph.",
"Similar to BIBREF2 , we use two measures as our evaluation metrics. (1) Mean Rank: the averaged rank of correct entities or relations; (2) Hits@p: the proportion of valid entities or relations ranked in top $p$ predictions. Here, we set $p=10$ for entities and $p=1$ for relations. A lower Mean Rank and a higher Hits@p should be achieved by a good embedding model. We call this evaluation setting “Raw”. Since a false predicted triplet may also exist in knowledge graphs, it should be regard as a valid triplet. Hence, we should remove the false predicted triplets included in training, validation and test sets before ranking (except the test triplet of interest). We call this evaluation setting “Filter”. The evaluation results are reported under these two settings.",
"We select the margin $\\gamma $ among $\\lbrace 1, 2\\rbrace $ , the embedding dimension $d$ among $\\lbrace 20, 50, 100\\rbrace $ , the regularization $\\eta $ among $\\lbrace 0, 1E{-5}, 1E{-6}\\rbrace $ , two learning rates $\\lambda _s$ and $\\lambda _t$ among $\\lbrace 0.001, 0.01, 0.05\\rbrace $ to learn the parameters of structure and text encoding. The dissimilarity measure is set to either $L_1$ or $\\lbrace 1, 2\\rbrace $0 distance.",
"In order to speed up the convergence and avoid overfitting, we initiate the structure embeddings of entity and relation with the results of TransE. The embedding of a word is initialized by averaging the linked entity embeddings whose description include this word. The rest parameters are initialized by randomly sampling from uniform distribution in $[-0.1, 0.1]$ .",
"The final optimal configurations are: $\\gamma = 2$ , $d=20$ , $\\eta =1E{-5}$ , $\\lambda _s = 0.01$ , $\\lambda _t = 0.1$ , and $L_1$ distance on WN18; $\\gamma =2$ , $d=100$ , $\\eta =1E{-5}$ , $\\lambda _s = 0.01$ , $d=20$0 , and $d=20$1 distance on FB15K.",
"Experimental results on both WN18 and FB15k are shown in Table 2 , where we use “Jointly(CBOW)”, “Jointly(LSTM)” and “Jointly(A-LSTM)” to represent our jointly encoding models with CBOW, LSTM and attentive LSTM text encoders. Our baseline is TransE since that the score function of our models is based on TransE.",
"From the results, we observe that proposed models surpass the baseline, TransE, on all metrics, which indicates that knowledge representation can benefit greatly from text description.",
"On WN18, the reason why “Jointly(A-LSTM)” is slightly worse than “Jointly(LSTM)” is probably because the number of relations is limited. Therefore, the attention mechanism does not have obvious advantage. On FB15K, “Jointly(A-LSTM)” achieves the best performance and is significantly higher than baseline methods on mean rank.",
"Although the Hits@10 of our models are worse than the best state-of-the-art method, TransD, it is worth noticing that the score function of our models is based on TransE, not TransD. Our models are compatible with other state-of-the-art knowledge embedding models. We believe that our model can be further improved by adopting the score functions of other state-of-the-art methods, such as TransD.",
"Besides, textual information largely alleviates the issue of sparsity and our model achieves substantial improvement on Mean Rank comparing with TransD. However, textual information may slightly degrade the representation of frequent entities which have been well-trained. This may be another reason why our Hits@10 is worse than TransD which only utilizes structural information.",
"For the comparison of Hits@10 of different kinds of relations, we categorized the relationships according to the cardinalities of their head and tail arguments into four classes: 1-to-1, 1-to-many, many-to-1, many-to-many. Mapping properties of relations follows the same rules in BIBREF2 .",
"Table 3 shows the detailed results by mapping properties of relations on FB15k. We can see that our models outperform baseline TransE in all types of relations (1-to-1, 1-to-N, N-to-1 and N-to-N), especially when (1) predicting “1-to-1” relations and (2) predicting the 1 side for “1-to-N” and “N-to-1” relations.",
"To get more insights into how the joint representation is influenced by the structure and text information. We observe the activations of gates, which control the balance between two sources of information, to understand the behavior of neurons. We sort the entities by their frequencies and divide them into 50 equal-size groups of different frequencies, and average the values of all gates in each group.",
"Figure 3 gives the average of gates in ten groups from high- to low-frequency. We observe that the text information play more important role for the low-frequency entities."
],
[
"Triplet classification is a binary classification task, which aims to judge whether a given triplet $(h, r, t)$ is a correct fact or not. Since our used test sets (WN18 and FB15K) only contain correct triplets, we construct negative triplets following the same setting used in BIBREF3 .",
"For triplets classification, we set a threshold $\\delta _r$ for each relation $r$ . $\\delta _r$ is obtained by maximizing the classification accuracies on the valid set. For a given triplet $(h, r, t)$ , if its score is larger than $\\delta _r$ , it will be classified as positive, otherwise negative.",
"Table 4 shows the evaluation results of triplets classification. The results reveal that our joint encoding models is effective and also outperform the baseline method.",
"On WN18, “Jointly(A-LSTM)” achieves the best performance, and the “Jointly(LSTM)” is slightly worse than “Jointly(A-LSTM)”. The reason is that the number of relations is relatively small. Therefore, the attention mechanism does not show obvious advantage. On FB15K, the classification accuracy of “Jointly(A-LSTM)” achieves 91.5%, which is the best and significantly higher than that of state-of-the-art methods."
],
[
"Recently, it has gained lots of interests to jointly learn the embeddings of knowledge graph and text information. There are several methods using textual information to help KG representation learning.",
" BIBREF3 represent an entity as the average of its word embeddings in entity name, allowing the sharing of textual information located in similar entity names.",
" BIBREF5 jointly embed knowledge and text into the same space by aligning the entity name and its Wikipedia anchor, which brings promising improvements to the accuracy of predicting facts. BIBREF6 extend the joint model and aligns knowledge and words in the entity descriptions. However, these two works align the two kinds of embeddings on word level, which can lose some semantic information on phrase or sentence level.",
" BIBREF25 also represent entities with entity names or the average of word embeddings in descriptions. However, their use of descriptions neglects word orders, and the use of entity names struggles with ambiguity. BIBREF7 jointly learn knowledge graph embeddings with entity descriptions. They use continuous bag-of-words and convolutional neural network to encode semantics of entity descriptions. However, they separate the objective functions into two energy functions of structure-based and description-based representations. BIBREF26 embeds both entity and relation embeddings by taking KG and text into consideration using CNN. To utilize both representations, they need further estimate an optimum weight coefficients to combine them together in the specific tasks.",
"Besides entity representation, there are also a lot of works BIBREF27 , BIBREF28 , BIBREF29 to map textual relations and knowledge base relations to the same vector space and obtained substantial improvements.",
"While releasing the current paper we discovered a paper by BIBREF30 proposing a similar model with attention mechanism which is evaluated on link prediction and triplet classification. However, our work encodes text description as a whole without explicit segmentation of sentences, which breaks the order and coherence among sentences."
],
[
"We propose a united representation for knowledge graph, utilizing both structure and text description information of the entities. Experiments show that our proposed jointly representation learning with gating mechanism is effective, which benefits to modeling the meaning of an entity.",
"In the future, we will consider the following research directions to improve our model:"
]
],
"section_name": [
"Introduction",
"Knowledge Graph Embedding",
"Neural Text Encoding",
"Bag-of-Words Encoder",
"LSTM Encoder",
"Attentive LSTM Encoder",
"Joint Structure and Text Encoder",
"Training",
"Experiment",
"Datasets",
"Link Prediction",
"Triplet Classification",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"86bf8a57a05c1af4857645ee23cba24f8607bf27",
"afa4551fcc1888e5b276e19d19fd6f2b367dab76"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"5d770d2c2154b0d6cfc8c60916c80bdb21622c78",
"64cd7b0c99ea6f24556b7cbdec280552e063f42e"
],
"answer": [
{
"evidence": [
"In this paper, we use three encoders (NBOW, LSTM and attentive LSTM) to model the text descriptions."
],
"extractive_spans": [],
"free_form_answer": "NBOW, LSTM, attentive LSTM",
"highlighted_evidence": [
"In this paper, we use three encoders (NBOW, LSTM and attentive LSTM) to model the text descriptions."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"A simple and intuitive method is the neural bag-of-words (NBOW) model, in which the representation of text can be generated by summing up its constituent word representations.",
"To address some of the modelling issues with NBOW, we consider using a bidirectional long short-term memory network (LSTM) BIBREF14 , BIBREF15 to model the text description.",
"Given a relation for an entity, not all of words/phrases in its text description are useful to model a specific fact. Some of them may be important for the given relation, but may be useless for other relations. Therefore, we introduce an attention mechanism BIBREF20 to utilize an attention-based encoder that constructs contextual text encodings according to different relations."
],
"extractive_spans": [
"neural bag-of-words (NBOW) model",
"bidirectional long short-term memory network (LSTM)",
"attention-based encoder"
],
"free_form_answer": "",
"highlighted_evidence": [
"A simple and intuitive method is the neural bag-of-words (NBOW) model, in which the representation of text can be generated by summing up its constituent word representations.",
"To address some of the modelling issues with NBOW, we consider using a bidirectional long short-term memory network (LSTM) BIBREF14 , BIBREF15 to model the text description.",
"Therefore, we introduce an attention mechanism BIBREF20 to utilize an attention-based encoder that constructs contextual text encodings according to different relations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"annotation_id": [
"05ba48f4452de551374de975a8127fc733e45368",
"5c634285acb50aae76fb79f9b7eb5927cc83a120"
],
"answer": [
{
"evidence": [
"Experimental results on both WN18 and FB15k are shown in Table 2 , where we use “Jointly(CBOW)”, “Jointly(LSTM)” and “Jointly(A-LSTM)” to represent our jointly encoding models with CBOW, LSTM and attentive LSTM text encoders. Our baseline is TransE since that the score function of our models is based on TransE."
],
"extractive_spans": [
"TransE"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our baseline is TransE since that the score function of our models is based on TransE."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Experimental results on both WN18 and FB15k are shown in Table 2 , where we use “Jointly(CBOW)”, “Jointly(LSTM)” and “Jointly(A-LSTM)” to represent our jointly encoding models with CBOW, LSTM and attentive LSTM text encoders. Our baseline is TransE since that the score function of our models is based on TransE."
],
"extractive_spans": [
"TransE"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our baseline is TransE since that the score function of our models is based on TransE."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
},
{
"annotation_id": [
"de0f3eda79d2e5b2932f775111b2d1df7102ba01"
],
"answer": [
{
"evidence": [
"We use two popular knowledge bases: WordNet BIBREF0 and Freebase BIBREF1 in this paper. Specifically, we use WN18 (a subset of WordNet) BIBREF24 and FB15K (a subset of Freebase) BIBREF2 since their text descriptions are easily publicly available. Table 1 lists statistics of the two datasets."
],
"extractive_spans": [
"WordNet BIBREF0",
"Freebase BIBREF1",
"WN18 (a subset of WordNet) BIBREF24 ",
"FB15K (a subset of Freebase) BIBREF2"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use two popular knowledge bases: WordNet BIBREF0 and Freebase BIBREF1 in this paper. Specifically, we use WN18 (a subset of WordNet) BIBREF24 and FB15K (a subset of Freebase) BIBREF2 since their text descriptions are easily publicly available. Table 1 lists statistics of the two datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"7803ba8358058c0f83a7d1e93e15ad3f404db5a5"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How large are the textual descriptions of entities?",
"What neural models are used to encode the text?",
"What baselines are used for comparison?",
"What datasets are used to evaluate this paper?"
],
"question_id": [
"2e37e681942e28b5b05639baaff4cd5129adb5fb",
"b49598b05358117ab1471b8ebd0b042d2f04b2a4",
"932b39fd6c47c6a880621a62e6a978491d881d60",
"b36f867fcda5ad62c46d23513369337352aa01d2"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"link prediction",
"link prediction",
"link prediction",
"link prediction"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 2: Our general architecture of jointly structural and textual encoding.",
"Table 1: Statistics of datasets used in experiments.",
"Table 2: Results on link prediction.",
"Table 3: Detailed results by category of relationship on FB15K.",
"Figure 3: Detailed results by entity frequency on FB15K.",
"Figure 4: Visualization of gates with different entity frequencies on FB15K.",
"Table 4: Results on triplet classification."
],
"file": [
"3-Figure2-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png",
"6-Table4-1.png"
]
} | [
"What neural models are used to encode the text?"
] | [
[
"1611.08661-Neural Text Encoding-2",
"1611.08661-LSTM Encoder-0",
"1611.08661-Bag-of-Words Encoder-0",
"1611.08661-Attentive LSTM Encoder-1"
]
] | [
"NBOW, LSTM, attentive LSTM"
] | 142 |
1910.07601 | Contextual Joint Factor Acoustic Embeddings | Embedding acoustic information into fixed length representations is of interest for a whole range of applications in speech and audio technology. We propose two novel unsupervised approaches to generate acoustic embeddings by modelling of acoustic context. The first approach is a contextual joint factor synthesis encoder, where the encoder in an encoder/decoder framework is trained to extract joint factors from surrounding audio frames to best generate the target output. The second approach is a contextual joint factor analysis encoder, where the encoder is trained to analyse joint factors from the source signal that correlates best with the neighbouring audio. To evaluate the effectiveness of our approaches compared to prior work, we chose two tasks - phone classification and speaker recognition - and test on different TIMIT data sets. Experimental results show that one of our proposed approaches outperforms phone classification baselines, yielding a classification accuracy of 74.1%. When using additional out-of-domain data for training, an additional 2-3% improvements can be obtained, for both for phone classification and speaker recognition tasks. | {
"paragraphs": [
[
"In recent years, word embeddings have been successfully used in natural language processing (NLP), the most commonly known models are Word2Vec BIBREF0 and GloveBIBREF1. The reasons for such success are manifold. One key attribute of embedding methods is that word embedding models take into account context information of words, thereby allowing a more compact and manageable representation for wordsBIBREF2, BIBREF3. The embeddings are widely applied in many downstream NLP tasks such as neural machine translation, dialogue system or text summarisation BIBREF4, BIBREF5, BIBREF6, as well as in language modelling for speech recognitionBIBREF7.",
"Embeddings of acoustic (and speech) signals are of more recent interest. The objective is to represent audio sequence information in compact form, replacing the raw audio data with one that contains latent factors BIBREF8, BIBREF9. The projection into such (latent) spaces should take into account different attributes, such as phonemes, speaker properties, speaking styles, the acoustic background or the recording environment. Acoustic embeddings have been explored for a variety of speech tasks such as speech recognitionBIBREF10, speaker verificationBIBREF11 or voice conversionBIBREF12. However, learning acoustic embeddings is challenging: attributes mentioned above, e.g. speaker properties and phonemes, operate at different levels of abstraction and are often strongly interdependent, and therefore are difficult to extract and represent in a meaningful formBIBREF8.",
"For speech processing BIBREF13, BIBREF14, BIBREF15 also use context information to derive acoustic embeddings. However, BIBREF13, BIBREF14 focus on learning word semantic representations from raw audio instead of signal properties such as phonemes and speaker properties. BIBREF15 focus on learning speaker representations by modelling of context information with a Siamese networks that discriminate whether a speech segment is the neighbourhood of a target segment or not.",
"In this paper, two unsupervised approaches to generate acoustic embeddings using context modelling are proposed. Both methods make use of the variational auto-encoder framework as proposed inBIBREF16 and both approaches aim to find joint latent variables between the target acoustic segments and its surrounding frames. In the first instance a representation is derived from surrounding audio frames that allows to predict current frame, thereby generating target audio from common factors. The encoder element of the associated auto-encoder is further referred to as Contextual Joint Factor Synthesis (CJFS) encoder. In the second instance an audio frame is used to predict surrounding audio, which is further referred to as Contextual Joint Factor Analysis (CJFA) encoding. As shown in previous work variational auto-encoders can be used to derive latent variables such as speaker information and phonemesBIBREF8 more robustly. In this work it is shown that including temporal information can further improve performance and robustness, for both phoneme classification and speaker identification tasks. Furthermore the use of additional unlabelled out-of-domain data can improved modelling for the proposed approaches. As outlined above, prior work has made use of surrounding audio in different forms. To the best of our knowledge this work is the first to show that predicting surrounding audio allows for efficient extraction of latent factors in speech signals.",
"The rest of paper is organised as follows: In §SECREF2 related work is described, Methods for deriving acoustic embeddings, and context modelling methods in NLP, computer vision and speech are discussed. This is followed by the description of the two approaches for modelling context as used in this work, in §SECREF3. The experimental framework is described in §SECREF4, including the data organisation, baseline design and task definitions; in §SECREF5 and §SECREF6 experiments results are shown and discussed. This is followed by the conclusions and future work in §SECREF7.",
""
],
[
""
],
[
"Most interest in acoustic embeddings can be observed on acoustic word embeddings, i.e. projections that map word acoustics into a fixed size vector space. Objective functions are chosen to project different word realisations to close proximity in the embedding space. Different approaches were used in the literature - for both supervised and unsupervised learning. For the supervised case, BIBREF9 introduced a convolutional neural network (CNN) based acoustic word embedding system for speech recognition, where words that sound alike are nearby in Euclidean distance. In their work, a CNN is used to predict a word from the corresponding acoustic signal, the output of the bottleneck layer before the final softmax layer is taken to be the embedding for the corresponding word. Further work used different network architectures to obtain acoustic word embeddings: BIBREF10 introduces a recurrent neural network (RNN) based approach instead.",
"For the case that the word boundary information is available but the word label itself is unknown, BIBREF12 proposed word similarity Siamese CNNs. These are used to minimise a distance function between representations of two instances of the same word type whilst at the same time maximising the distance between two instances of different words.",
"Unsupervised approaches also exist. BIBREF17 proposed a convolutional variational auto-encoder based approach to derive acoustic embedding, in unsupervised fashion. The authors chose phoneme and speaker classification tasks on TIMIT data to assess the quality of their embeddings - an approach replicated in the work presented in this paper. BIBREF8, BIBREF18 proposed an approach called factorised hierarchical variational auto-encoder. The work introduces the concepts of global and local latent factors, i.e. latent variables that are shared on the complete utterance, or latent variables that change within the sequence, respectively. Results are again obtained using the same data and tasks as above."
],
[
"Context information plays a fundamental role in speech processing. Phonemes could be influenced by surrounding frames through coarticulationBIBREF19 - an effect caused by speed limitations and transitions in the movement of articulators. Normally directly neighbouring phonemes have important impact on the sound realisation. Inversely, the surrounding phonemes also provide strong constraints on the phoneme that can be chosen at any given point, subject to to lexical and language constraints. This effect is for example exploited in phoneme recognition, by use of phoneme $n$-gram modelsBIBREF20. Equivalently inter word dependency - derived from linguistic constraints - can be exploited, as is the case in computing word embeddings with the aforementioned word2vecBIBREF0 method. The situation differs for the global latent variables, such as speaker properties or acoustic environment information. Speaker properties remains constant - and environments can also be assumed stationary over longer periods of time. Hence these variables are common between among neighbouring frames and windows. Modelling context information is helpful for identifying such information BIBREF21.",
"There is significant prior work that takes surrounding information into account to learn vector representations. For text processing the Word2VecBIBREF0 model directly predicts the neighbouring words from target words or inversely. This helps to capture the meanings of wordsBIBREF2. In computer vision, BIBREF22 introduced an visual feature learning approach called context encoder ,which is based on context based pixel prediction. Their model is trained to generate the contents of an image region from its surroundings. In speech processing BIBREF13, BIBREF14 proposed a sequence to sequence approach to predict surrounding segments of a target segment. However, the approach again aims at capturing word semantics from raw speech audio, words has similar semantic meanings are nearby in Euclidean distance. BIBREF15 proposed an unsupervised acoustic embedding approach. In their approach, instead of directly estimating the neighbourhood frames of a target segment, a Siamese architecture is used to discriminate whether a speech segment is in the neighbourhood of a target segment or not. Furthermore, their approach only aims at embedding of speaker properties. To the best of our knowledge, work presented here is the first derive phoneme and speaker representations by temporal context prediction using acoustic data."
],
[
"As shown in BIBREF17, variational auto-encoders (VAE)BIBREF8 can yield good representations in the latent space. One of the benefits is that the models allow to work with the latent distributionsBIBREF23, BIBREF8, BIBREF24. In this work, VAE is used to model the joint latent factors between the target segments and its surroundings.",
"Different from normal auto-encoders, where the input data is compressed into latent code which is a point estimation of latent variablesBIBREF16, the variational auto-encoder model defines a probabilistic generative process between the observation $x$ and the latent variable $z$. At the encoder step, the encoder provides an estimation of the latent variable $z$ given observation $x$ as $p(z|x)$. The decoder finds the most likely reconstruction $\\hat{x}$ subject to $p(\\hat{x}|z)$. The latent variable estimation $p(z|x)$, or the probability density function thereof, has many interpretations, simply as encoding, or as latent state space governing the construction of the original signal.",
"Computing $p(z|x)$ requires an estimate of the marginal likelihood $p(x)$ which is difficult to obtain in practice. A recognition model $q(z|x)$ is used to approximate $p(z|x)$ KL divergence between $p(z|x)$ and $q(z|x)$, as shown in Eq DISPLAY_FORM4, is minimisedBIBREF16.",
"From Eq DISPLAY_FORM4, the objective function for VAE training is derived shown in Eq DISPLAY_FORM5: BIBREF16, BIBREF17",
"where $E_{q(z|x)}log[p(x|z)]$ is also called the reconstruction likelihood and $ D_{KL}(q(z|x)||p(z))$ ensures the learned distribution $q(z|x)$ is close to prior distribution $p(z)$.",
""
],
[
"An audio signal is represented sequence of feature vectors $S=\\lbrace S_1,S_2,...S_T\\rbrace $, where $T$ is the length of the utterance. In the proposed method the concept of a target window is used, to which and embedding is related. A target window $X_t$ is a segment of speech representing features from $S_t$ to $S_{t+C-1}$, where $t \\in \\lbrace 1,2,...T-C+1\\rbrace $ and $C$ denotes the target window size. The left neighbour window of the target window is defined as the segment between $S_{t-N}$ and $S_{t-1}$, and the segment between $S_{t+C}$ and $S_{t+C+N-1}$ represents the right neighbour window of the target window, with $N$ being the single sided neighbour window size. The concatenation of left and right neighbour segments is further referred to $Y_t$. The proposed approach aims to find joint latent factors between target window segment $X_t$ and the concatenation of left and right neighbour window segments $Y_t$, for all segments. For convenience the subscript $t$ is dropped in following derivations where appropriate. Two different context use configurations can be used.",
"Figure FIGREF7 illustrate these two approaches. The audio signal is split into a sequence of left neighbour segment, target segment and right neighbour segment. In the first approach (left side on figure FIGREF7), the concatenation of the left neighbour segment and right neighbour segment ($Y$) is input to a VAE modelBIBREF16, and target window ($X$) is predicted. In the second approach (right side on figure FIGREF7) the target window ($X$) is the input to a VAE model, and neighbour window ($Y$) is predicted.",
"The first approach is referred to as the contextual joint factor synthesis encoder as it aims to synthesise the target frame $X$. Only factors common between input and output can form the basis for such prediction, and the encoded embedding can be considered a representation of these joint factors. Similar to the standard VAE formulations, the objective function of CJFS is given in Eq. DISPLAY_FORM8:",
"The first term represents the reconstruction likelihood between predicted target window segments and the neighbour window segments, and the second term denotes how similar the learned distribution $q(z|Y)$ is to the prior distribution of $z$, $p(z)$",
"In practice, for the reconstruction term can be based on the mean squared error (MSE) between the true target segment and the predicted target segment. For the second term in Eq DISPLAY_FORM8, samples for $p(z)$ are obtained from Gaussian distribution with zero mean and a variance of one ($p(z) \\sim \\mathcal {N} (0,1)$).",
"The second approach is the contextual joint factor analysis encoder. The objective is to predict the temporal context $Y$ based on input from a single centre segment $X$. Again joint factors between the three windows are obtained, and encoded in an embedding. However this time an analysis of one segment is enough. Naturally the training objective function of CJFA is represented by change of variables, as given in Eq DISPLAY_FORM9."
],
[
""
],
[
"Taking the VAE experiments as baseline, the TIMIT data is used for this workBIBREF25. TIMIT contains studio recordings from a large number of speakers with detailed phoneme segment information. Work in this paper makes use of the official training and test sets, covering in total 630 speakers with 8 utterances each. There is no speaker overlap between training and test set, which comprise of 462 and 168 speakers, respectively. All work presented here use of 80 dimensional Mel-scale filter bank coefficients.",
""
],
[
"Work on VAE in BIBREF17 to learn acoustic embeddings conducted experiments using the TIMIT data set. In particular the tasks of phone classification and speaker recognition where chosen. As work here is an extension of such work we we follow the experimentation, however with significant extensions (see Section SECREF13). With guidance from the authors of the original workBIBREF17 our own implementation of VAE was created and compared with the published performance - yielding near identical results. This implementation then was also used as the basis for CJFS and CJFA, as introduced in § SECREF6.",
"For the assessment of embedded vector quality our work also follows the same task types, namely phone classification and speaker recognition (details in §SECREF13), with identical task implementations as in the reference paper. It is important to note that phone classification differs from the widely reported phone recognition experiments on TIMIT. Classification uses phone boundaries which are assumed to be known. However, no contextual information is available, which is typically used in the recognition setups, by means of triphone models, or bigram language models. Therefore the task is often more difficult than recognition. The baseline performance for VAE based phone classification experiments in BIBREF17 report an accuracy of 72.2%. The re-implementation forming the basis for our work gave an accuracy of 72.0%, a result that was considered to provide a credible basis for further work.",
"For the purpose of speaker recognition it is important to take into account the overlap between training and testing. Thus three different task configurations are considered, different to the setting in BIBREF17. Their baseline will be further referred as VAE baseline.",
""
],
[
"The phone classification implementation operates on segment level, using a convolutional network to obtain frame by frame posteriors which are then accumulated for segment decision (assuming frame independence). The phone class with the highest segment posterior is chosen as output. An identical approach is used for speaker recognition. In this setting 3 different data sets are required: a training set for learning the encoder models, a training set for learning the classification model, and an evaluation test set. For the phone classification task, both embedding and classification models are trained on the official TIMIT training set, and makes use of the provided phone boundary information. A fixed size window with a frame step size of one frame is used for all model training. As noted, phone classification makes no use of phone context, and no language model is applied.",
"For speaker recognition overlap speaker between any of the datasets (training embeddings, training classifier and test) will cause a bias. Three different configurations (Tasks a,b,c) are used to assess this bias. Task a reflects the situation where both classifier and embedding are trained on the same data. As the task is to detect a speaker the speakers present in the test set need to be present in training. Task b represents a situation where classifier and embedding are trained on independent data sets, but with speaker overlap. Finally Task c represents complete independence in training data sets and no speaker overlap. Table TABREF15 summarises the relationships.",
"In order to achieve these configuration the TIMIT data was split. Fig. FIGREF12 illustrates the split of the data into 8 subsets (A–H). The TIMIT dataset contains speech from 462 speakers in training and 168 speakers in the test set, with 8 utterances for each speaker. The TIMIT training and test set are split into 8 blocks, where each block contains 2 utterances per speaker, randomly chosen. Thus each block A,B,C,D contains data from 462 speakers with 924 utterances taken from the training sets, and each block E,F,G,H contains speech from 168 test set speakers with 336 utterances.",
"For Task a training of embeddings and the classifier is identical, namely consisting of data from blocks (A+B+C+E+F+G). The test data is the remainder, namely blocks (D+H). For Task b the training of embeddings and classifiers uses (A+B+E+F) and (C+G) respectively, while again using (D+H) for test. Task c keeps both separate: embeddings are trained on (A+B+C+D), classifiers on (E+G) and tests are conducted on (F+H). Note that H is part of all tasks, and that Task c is considerably easier as the number of speakers to separate is only 168, although training conditions are more difficult.",
""
],
[
"For comparison the implementation, follows the convolutional model structure as deployed in BIBREF17. Both VAE encoder and decoder contain three convolutional layers and one fully-connected layer with 512 nodes. In the first layer of encoder, 1-by-80 filters are applied, and 3-by-1 filters are applied on the following two convolutional layer (strides was set to 1 in the first layer and 2 in the rest two layers). The decoder has the symmetric architecture to the encoder. Each layer is followed by a batch normalisation layerBIBREF26 except for the embedding layer, which is linear. Leaky ReLU activationBIBREF27 is used for each layer except for the embedding layer. The Adam optimiserBIBREF28 is used in training, with $\\beta _1$ set to 0.95, $\\beta _2$ to 0.999, and $\\epsilon $ is $10^{-8}$. The initial learning rate is $10^{-3}$",
""
],
[
"Table TABREF17 shows phone classification and speaker recognition results for the three model configurations: the VAE baseline, the CJFS encoder and the CJFA encoder. In our experiments the window size was set to 30 frames, namely 10 frames for the target and 10 frames for left and right neighbours, and an embedding dimension of 150. This was used for both CJFS and CJFA models alike. Results show that the CJFA encoder obtains significantly better phone classification accuracy than the VAE baseline and also than the CJFS encoder. These results are replicated for speaker recognition tasks. The CJFA encoder performs better on all tasks than the VAE baseline by a significant margin. It is noteworthy that performance on Task b is generally significantly lower than for Task a, for reasons of training overlap but also smaller training set sizes.",
"To further explore properties of the embedding systems a change of window size ($N$) and embedding dimension ($K$) is explored. One might argue that modelling context effectively widens the input data access. Hence these experiments should explore if there is benefit in the structure beyond data size. Graphs in Fig. FIGREF14 illustrate phone classification accuracy and speaker recognition performance for all three models under variation of latent size and window sizes. It is important to note that the target window size remains the same (10 frames) with an increase of $N$. Therefore e.g. $N=70$ describes the target window size is 10 frames, and the other two neighbour windows have 30 frames at either side (30,10,30 left to right). Better speaker recognition results are consistently obtained with the CJFA encoder for any configuration with competitive performance, compared with the VAE baseline and also CJFS settings - and CJFS settings mostly outperform the baseline. However the situation for phone classification is different. It is not surprising to see CJFS perform poorly on phone classification as the target frame in not present in the input, therefore the embedding just does not have the phone segment information. However, as per speaker recognition results, speaker information is retained.",
"A variation of the window sizes to larger windows seems detrimental in almost all cases, aside from the more difficult Task b. This may be in part the effect of the amount of training data available, however it confirms that contextual models outperform the baseline VAE model configuration, generally, and in particular also with the same amount of input data for speaker recognition. It is also noticeable that the decline or variation as a function of window size is less pronounced for the CJFA case, implying increased stability. For phone classification the trade-off benefit for window size is less clear.",
"For phone classification, increasing the embedding $K$ is helpful, but performance remains stable at $K=150$. Hence in all of the rest of our experiments, the embedding dimension is set to 150 for all of the rest configurations. For speaker recognition the observed variations are small.",
"A further set of experiments investigated the use of out of domain data for improving classification in a completely unsupervised setting. The RM corpus BIBREF29 was used in this case to augment the TIMIT data for training the embeddings only. All other configurations an training settings are unchanged. Table TABREF18 shows improvement after using additional out-of-domain data for training, except for in the case of CJFS and for phone classification. The improvement on all tasks with the simple addition of unlabelled audio data is remarkable. This is also true for the baseline, but the benefit of the proposed methods seems unaffected. The CJFA encoder performs better in comparison of the other two approaches and a absolute accuracy improvement of 7.9% for speaker recognition Task b is observed. The classification tasks benefits from the additional data even though the labelled data remains the same.",
""
],
[
"",
"To further evaluate the embeddings produced by the 3 models, visualisation using the t-SNE algorithmBIBREF30 is a common approach, although interpretation is sometimes difficult. Fig. FIGREF19 visualises the embeddings of phonemes in two-dimensional space, each phoneme symbol represents the mean vector of all of the embeddings belonging to the same phone classBIBREF31. One can observe that the CJFA encoder appears to generate more meaningful embeddings than the other two approaches - as phonemes belonging to the same sound classesBIBREF32 are grouped together in closer regions. The VAE baseline also has this behaviour but for example plosives are split and nasal separation seems less clear. Instead CJFS shows more confusion - as expected and explained above.",
""
],
[
"In this paper, two unsupervised acoustic embedding approaches to model the joint latent factors between the target window and neighbouring audio segments were proposed. Models are based on variational auto-encoders, which also constitute the baseline. In order to compare against the baseline models are assessed using phone classification and speaker recognition tasks, on TIMIT, and with additional RM data. Results show CJFA (contextual joint factor analysis) encoder performs significantly better in both phone classification and speaker recognition tasks compared with other two approaches. The CJFS (contextual joint factor synthesis) encoder performs close to CJFA in speaker recognition task, but poorer for phone classification. Overall a gain of up to 3% relative on phone classification accuracy is observed, relative improvements on speaker recognition show 3–6% gain. The proposed unsupervised approaches obtain embeddings and can be improved with unlabelled out-of-domain data, the classification tasks benefits even though the labelled data remains the same. Further work needs to expand experiments on larger data sets, phone recognition and more complex neural network architectures."
]
],
"section_name": [
"Introduction",
"Related Works",
"Related Works ::: Acoustic Embeddings",
"Related Works ::: Context Modelling",
"Model Architecture ::: Variational Auto-Encoders",
"Model Architecture ::: Proposed Model Architecture",
"Experimental Framework",
"Experimental Framework ::: Data and Use",
"Experimental Framework ::: Baselines",
"Experimental Framework ::: Evaluation Tasks",
"Experimental Framework ::: Implementation",
"Results and Discussion",
"Analysis",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"523c10e6b2035a39af1ce32310ff7b444f1126f1",
"eb1efe2c87183de8a681dabaac7ef49266c3a31e"
],
"answer": [
{
"evidence": [
"Table TABREF17 shows phone classification and speaker recognition results for the three model configurations: the VAE baseline, the CJFS encoder and the CJFA encoder. In our experiments the window size was set to 30 frames, namely 10 frames for the target and 10 frames for left and right neighbours, and an embedding dimension of 150. This was used for both CJFS and CJFA models alike. Results show that the CJFA encoder obtains significantly better phone classification accuracy than the VAE baseline and also than the CJFS encoder. These results are replicated for speaker recognition tasks. The CJFA encoder performs better on all tasks than the VAE baseline by a significant margin. It is noteworthy that performance on Task b is generally significantly lower than for Task a, for reasons of training overlap but also smaller training set sizes."
],
"extractive_spans": [
"CJFA encoder "
],
"free_form_answer": "",
"highlighted_evidence": [
"Results show that the CJFA encoder obtains significantly better phone classification accuracy than the VAE baseline and also than the CJFS encoder. These results are replicated for speaker recognition tasks."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF17 shows phone classification and speaker recognition results for the three model configurations: the VAE baseline, the CJFS encoder and the CJFA encoder. In our experiments the window size was set to 30 frames, namely 10 frames for the target and 10 frames for left and right neighbours, and an embedding dimension of 150. This was used for both CJFS and CJFA models alike. Results show that the CJFA encoder obtains significantly better phone classification accuracy than the VAE baseline and also than the CJFS encoder. These results are replicated for speaker recognition tasks. The CJFA encoder performs better on all tasks than the VAE baseline by a significant margin. It is noteworthy that performance on Task b is generally significantly lower than for Task a, for reasons of training overlap but also smaller training set sizes."
],
"extractive_spans": [
"CJFA encoder"
],
"free_form_answer": "",
"highlighted_evidence": [
"Results show that the CJFA encoder obtains significantly better phone classification accuracy than the VAE baseline and also than the CJFS encoder. These results are replicated for speaker recognition tasks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ba7a565b186a2b1464815ac25bafc8131187670d",
"ce5fefd5eef468635b6773172c7274206916510c"
],
"answer": [
{
"evidence": [
"Work on VAE in BIBREF17 to learn acoustic embeddings conducted experiments using the TIMIT data set. In particular the tasks of phone classification and speaker recognition where chosen. As work here is an extension of such work we we follow the experimentation, however with significant extensions (see Section SECREF13). With guidance from the authors of the original workBIBREF17 our own implementation of VAE was created and compared with the published performance - yielding near identical results. This implementation then was also used as the basis for CJFS and CJFA, as introduced in § SECREF6.",
"For the assessment of embedded vector quality our work also follows the same task types, namely phone classification and speaker recognition (details in §SECREF13), with identical task implementations as in the reference paper. It is important to note that phone classification differs from the widely reported phone recognition experiments on TIMIT. Classification uses phone boundaries which are assumed to be known. However, no contextual information is available, which is typically used in the recognition setups, by means of triphone models, or bigram language models. Therefore the task is often more difficult than recognition. The baseline performance for VAE based phone classification experiments in BIBREF17 report an accuracy of 72.2%. The re-implementation forming the basis for our work gave an accuracy of 72.0%, a result that was considered to provide a credible basis for further work."
],
"extractive_spans": [
"VAE"
],
"free_form_answer": "",
"highlighted_evidence": [
"Work on VAE in BIBREF17 to learn acoustic embeddings conducted experiments using the TIMIT data set.",
"The baseline performance for VAE based phone classification experiments in BIBREF17 report an accuracy of 72.2%. The re-implementation forming the basis for our work gave an accuracy of 72.0%, a result that was considered to provide a credible basis for further work."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the assessment of embedded vector quality our work also follows the same task types, namely phone classification and speaker recognition (details in §SECREF13), with identical task implementations as in the reference paper. It is important to note that phone classification differs from the widely reported phone recognition experiments on TIMIT. Classification uses phone boundaries which are assumed to be known. However, no contextual information is available, which is typically used in the recognition setups, by means of triphone models, or bigram language models. Therefore the task is often more difficult than recognition. The baseline performance for VAE based phone classification experiments in BIBREF17 report an accuracy of 72.2%. The re-implementation forming the basis for our work gave an accuracy of 72.0%, a result that was considered to provide a credible basis for further work."
],
"extractive_spans": [
"VAE based phone classification"
],
"free_form_answer": "",
"highlighted_evidence": [
"The baseline performance for VAE based phone classification experiments in BIBREF17 report an accuracy of 72.2%. The re-implementation forming the basis for our work gave an accuracy of 72.0%, a result that was considered to provide a credible basis for further work."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"18340a3e9da5380d839aae4b6715a52e54335789",
"7afec0ba988a067db5983d341856cc1e1c50b473"
],
"answer": [
{
"evidence": [
"In order to achieve these configuration the TIMIT data was split. Fig. FIGREF12 illustrates the split of the data into 8 subsets (A–H). The TIMIT dataset contains speech from 462 speakers in training and 168 speakers in the test set, with 8 utterances for each speaker. The TIMIT training and test set are split into 8 blocks, where each block contains 2 utterances per speaker, randomly chosen. Thus each block A,B,C,D contains data from 462 speakers with 924 utterances taken from the training sets, and each block E,F,G,H contains speech from 168 test set speakers with 336 utterances.",
"For Task a training of embeddings and the classifier is identical, namely consisting of data from blocks (A+B+C+E+F+G). The test data is the remainder, namely blocks (D+H). For Task b the training of embeddings and classifiers uses (A+B+E+F) and (C+G) respectively, while again using (D+H) for test. Task c keeps both separate: embeddings are trained on (A+B+C+D), classifiers on (E+G) and tests are conducted on (F+H). Note that H is part of all tasks, and that Task c is considerably easier as the number of speakers to separate is only 168, although training conditions are more difficult."
],
"extractive_spans": [],
"free_form_answer": "Once split into 8 subsets (A-H), the test set used are blocks D+H and blocks F+H",
"highlighted_evidence": [
"In order to achieve these configuration the TIMIT data was split. Fig. FIGREF12 illustrates the split of the data into 8 subsets (A–H). The TIMIT dataset contains speech from 462 speakers in training and 168 speakers in the test set, with 8 utterances for each speaker. The TIMIT training and test set are split into 8 blocks, where each block contains 2 utterances per speaker, randomly chosen. Thus each block A,B,C,D contains data from 462 speakers with 924 utterances taken from the training sets, and each block E,F,G,H contains speech from 168 test set speakers with 336 utterances.\n\nFor Task a training of embeddings and the classifier is identical, namely consisting of data from blocks (A+B+C+E+F+G). The test data is the remainder, namely blocks (D+H). For Task b the training of embeddings and classifiers uses (A+B+E+F) and (C+G) respectively, while again using (D+H) for test. Task c keeps both separate: embeddings are trained on (A+B+C+D), classifiers on (E+G) and tests are conducted on (F+H). Note that H is part of all tasks, and that Task c is considerably easier as the number of speakers to separate is only 168, although training conditions are more difficult."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Taking the VAE experiments as baseline, the TIMIT data is used for this workBIBREF25. TIMIT contains studio recordings from a large number of speakers with detailed phoneme segment information. Work in this paper makes use of the official training and test sets, covering in total 630 speakers with 8 utterances each. There is no speaker overlap between training and test set, which comprise of 462 and 168 speakers, respectively. All work presented here use of 80 dimensional Mel-scale filter bank coefficients."
],
"extractive_spans": [
" this paper makes use of the official training and test sets, covering in total 630 speakers with 8 utterances each"
],
"free_form_answer": "",
"highlighted_evidence": [
"Taking the VAE experiments as baseline, the TIMIT data is used for this workBIBREF25. TIMIT contains studio recordings from a large number of speakers with detailed phoneme segment information. Work in this paper makes use of the official training and test sets, covering in total 630 speakers with 8 utterances each."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"05debde586693e820135aacf5ce5c2d3fb415bc9"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Which approach out of two proposed in the paper performed better in experiments?",
"What classification baselines are used for comparison?",
"What TIMIT datasets are used for testing?",
"How does this approach compares to the state-of-the-art results on these tasks?"
],
"question_id": [
"c6a0b9b5dabcefda0233320dd1548518a0ae758e",
"1e185a3b8cac1da939427b55bf1ba7e768c5dae4",
"26e2d4d0e482e6963a76760323b8e1c26b6eee91",
"b80a3fbeb49a8968e149955bdcf199556478eeff"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. The architecture of CJFS (Left) and CJFA (Right). Both CJFS and CJFA were built based on variational auto encoder, embeddings were extracted on the bottleneck layer.",
"Fig. 2. Data split of the TIMIT corpus for definition of data sets for speaker recognition. Training and test sets are split int 4 parts of 2 utterances each. Different combination of sets for training and test are used of different tasks.",
"Table 1. Definition of training configurations a, b, and c.",
"Fig. 3. Phone classification accuracy, and speaker recognition accuracy for Tasks a,b,and c (as defined at 4.3), when varying the embedding dimension (top row), and window sizes (bottom row).",
"Table 2. % Phone classification and speaker recognition accuracy with three different model types. Embedding dimension is 150 and target window size is 10 frames, neighbour window sizes are 10 frames each.",
"Fig. 4. The t-SNE visualisation of phones in the test set for three models: (a): VAE baseline (b): CJFS encoder and (c): CJFA encoder",
"Table 3. % Phone classification and speaker recognition accuracy with TIMIT data and RM data."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"4-Table1-1.png",
"5-Figure3-1.png",
"5-Table2-1.png",
"6-Figure4-1.png",
"6-Table3-1.png"
]
} | [
"What TIMIT datasets are used for testing?"
] | [
[
"1910.07601-Experimental Framework ::: Data and Use-0",
"1910.07601-Experimental Framework ::: Evaluation Tasks-2",
"1910.07601-Experimental Framework ::: Evaluation Tasks-3"
]
] | [
"Once split into 8 subsets (A-H), the test set used are blocks D+H and blocks F+H"
] | 143 |
1909.00175 | Joint Detection and Location of English Puns | A pun is a form of wordplay for an intended humorous or rhetorical effect, where a word suggests two or more meanings by exploiting polysemy (homographic pun) or phonological similarity to another word (heterographic pun). This paper presents an approach that addresses pun detection and pun location jointly from a sequence labeling perspective. We employ a new tagging scheme such that the model is capable of performing such a joint task, where useful structural information can be properly captured. We show that our proposed model is effective in handling both homographic and heterographic puns. Empirical results on the benchmark datasets demonstrate that our approach can achieve new state-of-the-art results. | {
"paragraphs": [
[
"There exists a class of language construction known as pun in natural language texts and utterances, where a certain word or other lexical items are used to exploit two or more separate meanings. It has been shown that understanding of puns is an important research question with various real-world applications, such as human-computer interaction BIBREF0 , BIBREF1 and machine translation BIBREF2 . Recently, many researchers show their interests in studying puns, like detecting pun sentences BIBREF3 , locating puns in the text BIBREF4 , interpreting pun sentences BIBREF5 and generating sentences containing puns BIBREF6 , BIBREF7 , BIBREF8 . A pun is a wordplay in which a certain word suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect. Puns can be generally categorized into two groups, namely heterographic puns (where the pun and its latent target are phonologically similar) and homographic puns (where the two meanings of the pun reflect its two distinct senses) BIBREF9 . Consider the following two examples:",
"The first punning joke exploits the sound similarity between the word “propane\" and the latent target “profane\", which can be categorized into the group of heterographic puns. Another categorization of English puns is homographic pun, exemplified by the second instance leveraging distinct senses of the word “gut\".",
"Pun detection is the task of detecting whether there is a pun residing in the given text. The goal of pun location is to find the exact word appearing in the text that implies more than one meanings. Most previous work addresses such two tasks separately and develop separate systems BIBREF10 , BIBREF5 . Typically, a system for pun detection is built to make a binary prediction on whether a sentence contains a pun or not, where all instances (with or without puns) are taken into account during training. For the task of pun location, a separate system is used to make a single prediction as to which word in the given sentence in the text that trigger more than one semantic interpretations of the text, where the training data involves only sentences that contain a pun. Therefore, if one is interested in solving both problems at the same time, a pipeline approach that performs pun detection followed by pun location can be used.",
"Compared to the pipeline methods, joint learning has been shown effective BIBREF11 , BIBREF12 since it is able to reduce error propagation and allows information exchange between tasks which is potentially beneficial to all the tasks. In this work, we demonstrate that the detection and location of puns can be jointly addressed by a single model. The pun detection and location tasks can be combined as a sequence labeling problem, which allows us to jointly detect and locate a pun in a sentence by assigning each word a tag. Since each context contains a maximum of one pun BIBREF9 , we design a novel tagging scheme to capture this structural constraint. Statistics on the corpora also show that a pun tends to appear in the second half of a context. To capture such a structural property, we also incorporate word position knowledge into our structured prediction model. Experiments on the benchmark datasets show that detection and location tasks can reinforce each other, leading to new state-of-the-art performance on these two tasks. To the best of our knowledge, this is the first work that performs joint detection and location of English puns by using a sequence labeling approach."
],
[
"We first design a simple tagging scheme consisting of two tags { INLINEFORM0 }:",
" INLINEFORM0 tag means the current word is not a pun.",
" INLINEFORM0 tag means the current word is a pun.",
"If the tag sequence of a sentence contains a INLINEFORM0 tag, then the text contains a pun and the word corresponding to INLINEFORM1 is the pun.",
"The contexts have the characteristic that each context contains a maximum of one pun BIBREF9 . In other words, there exists only one pun if the given sentence is detected as the one containing a pun. Otherwise, there is no pun residing in the text. To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }.",
" INLINEFORM0 tag indicates that the current word appears before the pun in the given context.",
" INLINEFORM0 tag highlights the current word is a pun.",
" INLINEFORM0 tag indicates that the current word appears after the pun.",
"We empirically show that the INLINEFORM0 scheme can guarantee the context property that there exists a maximum of one pun residing in the text.",
"Given a context from the training set, we will be able to generate its corresponding gold tag sequence using a deterministic procedure. Under the two schemes, if a sentence does not contain any puns, all words will be tagged with INLINEFORM0 or INLINEFORM1 , respectively. Exemplified by the second sentence “Some diets cause a gut reaction,\" the pun is given as “gut.\" Thus, under the INLINEFORM2 scheme, it should be tagged with INLINEFORM3 , while the words before it are assigned with the tag INLINEFORM4 and words after it are with INLINEFORM5 , as illustrated in Figure FIGREF8 . Likewise, the INLINEFORM6 scheme tags the word “gut\" with INLINEFORM7 , while other words are tagged with INLINEFORM8 . Therefore, we can combine the pun detection and location tasks into one problem which can be solved by the sequence labeling approach."
],
[
"Neural models have shown their effectiveness on sequence labeling tasks BIBREF13 , BIBREF14 , BIBREF15 . In this work, we adopt the bidirectional Long Short Term Memory (BiLSTM) BIBREF16 networks on top of the Conditional Random Fields BIBREF17 (CRF) architecture to make labeling decisions, which is one of the classical models for sequence labeling. Our model architecture is illustrated in Figure FIGREF8 with a running example. Given a context/sentence INLINEFORM0 where INLINEFORM1 is the length of the context, we generate the corresponding tag sequence INLINEFORM2 based on our designed tagging schemes and the original annotations for pun detection and location provided by the corpora. Our model is then trained on pairs of INLINEFORM3 .",
"Input. The contexts in the pun corpus hold the property that each pun contains exactly one content word, which can be either a noun, a verb, an adjective, or an adverb. To capture this characteristic, we consider lexical features at the character level. Similar to the work of BIBREF15 , the character embeddings are trained by the character-level LSTM networks on the unannotated input sequences. Nonlinear transformations are then applied to the character embeddings by highway networks BIBREF18 , which map the character-level features into different semantic spaces.",
"We also observe that a pun tends to appear at the end of a sentence. Specifically, based on the statistics, we found that sentences with a pun that locate at the second half of the text account for around 88% and 92% in homographic and heterographic datasets, respectively. We thus introduce a binary feature that indicates if a word is located at the first or the second half of an input sentence to capture such positional information. A binary indicator can be mapped to a vector representation using a randomly initialized embedding table BIBREF19 , BIBREF20 . In this work, we directly adopt the value of the binary indicator as part of the input.",
"The concatenation of the transformed character embeddings, the pre-trained word embeddings BIBREF21 , and the position indicators are taken as input of our model.",
"Tagging. The input is then fed into a BiLSTM network, which will be able to capture contextual information. For a training instance INLINEFORM0 , we suppose the output by the word-level BiLSTM is INLINEFORM1 . The CRF layer is adopted to capture label dependencies and make final tagging decisions at each position, which has been included in many state-of-the-art sequence labeling models BIBREF14 , BIBREF15 . The conditional probability is defined as:",
"",
"",
"",
"where INLINEFORM0 is a set of all possible label sequences consisting of tags from INLINEFORM1 (or INLINEFORM2 ), INLINEFORM3 and INLINEFORM4 are weight and bias parameters corresponding to the label pair INLINEFORM5 . During training, we minimize the negative log-likelihood summed over all training instances:",
"",
"where INLINEFORM0 refers to the INLINEFORM1 -th instance in the training set. During testing, we aim to find the optimal label sequence for a new input INLINEFORM2 :",
"",
"This search process can be done efficiently using the Viterbi algorithm."
],
[
"We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. We notice there is no standard splitting information provided for both datasets. Thus we apply 10-fold cross validation. To make direct comparisons with prior studies, following BIBREF4 , we accumulated the predictions for all ten folds and calculate the scores in the end.",
"For each fold, we randomly select 10% of the instances from the training set for development. Word embeddings are initialized with the 100-dimensional Glove BIBREF21 . The dimension of character embeddings is 30 and they are randomly initialized, which can be fine tuned during training. The pre-trained word embeddings are not updated during training. The dimensions of hidden vectors for both char-level and word-level LSTM units are set to 300. We adopt stochastic gradient descent (SGD) BIBREF26 with a learning rate of 0.015.",
"For the pun detection task, if the predicted tag sequence contains at least one INLINEFORM0 tag, we regard the output (i.e., the prediction of our pun detection model) for this task as true, otherwise false. For the pun location task, a predicted pun is regarded as correct if and only if it is labeled as the gold pun in the dataset. As to pun location, to make fair comparisons with prior studies, we only consider the instances that are labeled as the ones containing a pun. We report precision, recall and INLINEFORM1 score in Table TABREF11 . A list of prior works that did not employ joint learning are also shown in the first block of Table TABREF11 ."
],
[
"We also implemented a baseline model based on conditional random fields (CRF), where features like POS tags produced by the Stanford POS tagger BIBREF27 , n-grams, label transitions, word suffixes and relative position to the end of the text are considered. We can see that our model with the INLINEFORM0 tagging scheme yields new state-of-the-art INLINEFORM1 scores on pun detection and competitive results on pun location, compared to baselines that do not adopt joint learning in the first block. For location on heterographic puns, our model's performance is slightly lower than the system of BIBREF25 , which is a rule-based locator. Compared to CRF, we can see that our model, either with the INLINEFORM2 or the INLINEFORM3 scheme, yields significantly higher recall on both detection and location tasks, while the precisions are relatively close. This demonstrates the effectiveness of BiLSTM, which learns the contextual features of given texts – such information appears to be helpful in recalling more puns.",
"Compared to the INLINEFORM0 scheme, the INLINEFORM1 tagging scheme is able to yield better performance on these two tasks. After studying outputs from these two approaches, we found that one leading source of error for the INLINEFORM2 approach is that there exist more than one words in a single instance that are assigned with the INLINEFORM3 tag. However, according to the description of pun in BIBREF9 , each context contains a maximum of one pun. Thus, such a useful structural constraint is not well captured by the simple approach based on the INLINEFORM4 tagging scheme. On the other hand, by applying the INLINEFORM5 tagging scheme, such a constraint is properly captured in the model. As a result, the results for such a approach are significantly better than the approach based on the INLINEFORM6 tagging scheme, as we can observe from the table. Under the same experimental setup, we also attempted to exclude word position features. Results are given by INLINEFORM7 - INLINEFORM8 . It is expected that the performance of pun location drops, since such position features are able to capture the interesting property that a pun tends to appear in the second half of a sentence. While such knowledge is helpful for the location task, interestingly, a model without position knowledge yields improved performance on the pun detection task. One possible reason is that detecting whether a sentence contains a pun is not concerned with such word position information.",
"Additionally, we conduct experiments over sentences containing a pun only, namely 1,607 and 1,271 instances from homographic and heterographic pun corpora separately. It can be regarded as a “pipeline” method where the classifier for pun detection is regarded as perfect. Following the prior work of BIBREF4 , we apply 10-fold cross validation. Since we are given that all input sentences contain a pun, we only report accumulated results on pun location, denoted as Pipeline in Table TABREF11 . Compared with our approaches, the performance of such an approach drops significantly. On the other hand, such a fact demonstrates that the two task, detection and location of puns, can reinforce each other. These figures demonstrate the effectiveness of our sequence labeling method to detect and locate English puns in a joint manner."
],
[
"We studied the outputs from our system and make some error analysis. We found the errors can be broadly categorized into several types, and we elaborate them here. 1) Low word coverage: since the corpora are relatively small, there exist many unseen words in the test set. Learning the representations of such unseen words is challenging, which affects the model's performance. Such errors contribute around 40% of the total errors made by our system. 2) Detection errors: we found many errors are due to the model's inability to make correct pun detection. Such inability harms both pun detection and pun location. Although our approach based on the INLINEFORM0 tagging scheme yields relatively higher scores on the detection task, we still found that 40% of the incorrectly predicted instances fall into this group. 3) Short sentences: we found it was challenging for our model to make correct predictions when the given text is short. Consider the example “Superglue! Tom rejoined,\" here the word rejoined is the corresponding pun. However, it would be challenging to figure out the pun with such limited contextual information."
],
[
"Most existing systems address pun detection and location separately. BIBREF22 applied word sense knowledge to conduct pun detection. BIBREF24 trained a bidirectional RNN classifier for detecting homographic puns. Next, a knowledge-based approach is adopted to find the exact pun. Such a system is not applicable to heterographic puns. BIBREF28 applied Google n-gram and word2vec to make decisions. The phonetic distance via the CMU Pronouncing Dictionary is computed to detect heterographic puns. BIBREF10 used the hidden Markov model and a cyclic dependency network with rich features to detect and locate puns. BIBREF23 used a supervised approach to pun detection and a weakly supervised approach to pun location based on the position within the context and part of speech features. BIBREF25 proposed a rule-based system for pun location that scores candidate words according to eleven simple heuristics. Two systems are developed to conduct detection and location separately in the system known as UWAV BIBREF3 . The pun detector combines predictions from three classifiers. The pun locator considers word2vec similarity between every pair of words in the context and position to pinpoint the pun. The state-of-the-art system for homographic pun location is a neural method BIBREF4 , where the word senses are incorporated into a bidirectional LSTM model. This method only supports the pun location task on homographic puns. Another line of research efforts related to this work is sequence labeling, such as POS tagging, chunking, word segmentation and NER. The neural methods have shown their effectiveness in this task, such as BiLSTM-CNN BIBREF13 , GRNN BIBREF29 , LSTM-CRF BIBREF30 , LSTM-CNN-CRF BIBREF14 , LM-LSTM-CRF BIBREF15 .",
"In this work, we combine pun detection and location tasks as a single sequence labeling problem. Inspired by the work of BIBREF15 , we also adopt a LSTM-CRF with character embeddings to make labeling decisions."
],
[
"In this paper, we propose to perform pun detection and location tasks in a joint manner from a sequence labeling perspective. We observe that each text in our corpora contains a maximum of one pun. Hence, we design a novel tagging scheme to incorporate such a constraint. Such a scheme guarantees that there is a maximum of one word that will be tagged as a pun during the testing phase. We also found the interesting structural property such as the fact that most puns tend to appear at the second half of the sentences can be helpful for such a task, but was not explored in previous works. Furthermore, unlike many previous approaches, our approach, though simple, is generally applicable to both heterographic and homographic puns. Empirical results on the benchmark datasets prove the effectiveness of the proposed approach that the two tasks of pun detection and location can be addressed by a single model from a sequence labeling perspective.",
"Future research includes the investigations on how to make use of richer semantic and linguistic information for detection and location of puns. Research on puns for other languages such as Chinese is still under-explored, which could also be an interesting direction for our future studies."
],
[
"We would like to thank the three anonymous reviewers for their thoughtful and constructive comments. This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156, and is partially supported by SUTD project PIE-SGP-AI-2018-01."
]
],
"section_name": [
"Introduction",
"Problem Definition",
"Model",
"Datasets and Settings",
"Results",
"Error Analysis",
"Related Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"0824fa4027c59e619070ee165d5e8130b4778452",
"0eb1d27fcba90996b7349c86227ca2c4a1de172d"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"
],
"extractive_spans": [],
"free_form_answer": "F1 score of 92.19 on homographic pun detection, 80.19 on homographic pun location, 89.76 on heterographic pun detection.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"
],
"extractive_spans": [],
"free_form_answer": "for the homographic dataset F1 score of 92.19 and 80.19 on detection and location and for the heterographic dataset F1 score of 89.76 on detection",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"3ccd072a8a04adc02e5c41da79e89ec925332e96"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"
],
"extractive_spans": [],
"free_form_answer": "They compare with the following models: by Pedersen (2017), by Pramanick and Das (2017), by Mikhalkova and Karyakin (2017), by Vadehra (2017), Indurthi and Oota (2017), by Vechtomova (2017), by (Cai et al., 2018), and CRF.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"8cff4e8e26cb734a751f434df6b14db60b40c63b",
"cb95e23cf1c150813e18193d0f21b9fd649361ea"
],
"answer": [
{
"evidence": [
"We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. We notice there is no standard splitting information provided for both datasets. Thus we apply 10-fold cross validation. To make direct comparisons with prior studies, following BIBREF4 , we accumulated the predictions for all ten folds and calculate the scores in the end."
],
"extractive_spans": [
"The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun."
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. We notice there is no standard splitting information provided for both datasets. Thus we apply 10-fold cross validation. To make direct comparisons with prior studies, following BIBREF4 , we accumulated the predictions for all ten folds and calculate the scores in the end."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. We notice there is no standard splitting information provided for both datasets. Thus we apply 10-fold cross validation. To make direct comparisons with prior studies, following BIBREF4 , we accumulated the predictions for all ten folds and calculate the scores in the end."
],
"extractive_spans": [],
"free_form_answer": "A homographic and heterographic benchmark datasets by BIBREF9.",
"highlighted_evidence": [
"We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"351dc84a89b28cd23777a9efab36ecd5921b810b",
"c030b762727732384d224e57da1c18e52efba8bd"
],
"answer": [
{
"evidence": [
"The contexts have the characteristic that each context contains a maximum of one pun BIBREF9 . In other words, there exists only one pun if the given sentence is detected as the one containing a pun. Otherwise, there is no pun residing in the text. To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }.",
"INLINEFORM0 tag indicates that the current word appears before the pun in the given context.",
"INLINEFORM0 tag highlights the current word is a pun.",
"INLINEFORM0 tag indicates that the current word appears after the pun."
],
"extractive_spans": [],
"free_form_answer": "A new tagging scheme that tags the words before and after the pun as well as the pun words.",
"highlighted_evidence": [
"To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }.\n\nINLINEFORM0 tag indicates that the current word appears before the pun in the given context.\n\nINLINEFORM0 tag highlights the current word is a pun.\n\nINLINEFORM0 tag indicates that the current word appears after the pun."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The contexts have the characteristic that each context contains a maximum of one pun BIBREF9 . In other words, there exists only one pun if the given sentence is detected as the one containing a pun. Otherwise, there is no pun residing in the text. To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }.",
"INLINEFORM0 tag indicates that the current word appears before the pun in the given context.",
"INLINEFORM0 tag highlights the current word is a pun.",
"INLINEFORM0 tag indicates that the current word appears after the pun.",
"We empirically show that the INLINEFORM0 scheme can guarantee the context property that there exists a maximum of one pun residing in the text."
],
"extractive_spans": [
"a new tagging scheme consisting of three tags, namely { INLINEFORM0 }"
],
"free_form_answer": "",
"highlighted_evidence": [
"The contexts have the characteristic that each context contains a maximum of one pun BIBREF9 . In other words, there exists only one pun if the given sentence is detected as the one containing a pun. Otherwise, there is no pun residing in the text. To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }.\n\nINLINEFORM0 tag indicates that the current word appears before the pun in the given context.\n\nINLINEFORM0 tag highlights the current word is a pun.\n\nINLINEFORM0 tag indicates that the current word appears after the pun.\n\nWe empirically show that the INLINEFORM0 scheme can guarantee the context property that there exists a maximum of one pun residing in the text."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What state-of-the-art results are achieved?",
"What baselines do they compare with?",
"What datasets are used in evaluation?",
"What is the tagging scheme employed?"
],
"question_id": [
"badc9db40adbbf2ea7bac29f2e4e3b6b9175b1f9",
"67b66fe67a3cb2ce043070513664203e564bdcbd",
"f56d07f73b31a9c72ea737b40103d7004ef6a079",
"38e4aaeabf06a63a067b272f8950116733a7895c"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Model architecture",
"Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png"
]
} | [
"What state-of-the-art results are achieved?",
"What baselines do they compare with?",
"What datasets are used in evaluation?",
"What is the tagging scheme employed?"
] | [
[
"1909.00175-4-Table1-1.png"
],
[
"1909.00175-4-Table1-1.png"
],
[
"1909.00175-Datasets and Settings-0"
],
[
"1909.00175-Problem Definition-8",
"1909.00175-Problem Definition-4"
]
] | [
"for the homographic dataset F1 score of 92.19 and 80.19 on detection and location and for the heterographic dataset F1 score of 89.76 on detection",
"They compare with the following models: by Pedersen (2017), by Pramanick and Das (2017), by Mikhalkova and Karyakin (2017), by Vadehra (2017), Indurthi and Oota (2017), by Vechtomova (2017), by (Cai et al., 2018), and CRF.",
"A homographic and heterographic benchmark datasets by BIBREF9.",
"A new tagging scheme that tags the words before and after the pun as well as the pun words."
] | 144 |
1910.06036 | Improving Question Generation With to the Point Context | Question generation (QG) is the task of generating a question from a reference sentence and a specified answer within the sentence. A major challenge in QG is to identify answer-relevant context words to finish the declarative-to-interrogative sentence transformation. Existing sequence-to-sequence neural models achieve this goal by proximity-based answer position encoding under the intuition that neighboring words of answers are of high possibility to be answer-relevant. However, such intuition may not apply to all cases especially for sentences with complex answer-relevant relations. Consequently, the performance of these models drops sharply when the relative distance between the answer fragment and other non-stop sentence words that also appear in the ground truth question increases. To address this issue, we propose a method to jointly model the unstructured sentence and the structured answer-relevant relation (extracted from the sentence in advance) for question generation. Specifically, the structured answer-relevant relation acts as the to the point context and it thus naturally helps keep the generated question to the point, while the unstructured sentence provides the full information. Extensive experiments show that to the point context helps our question generation model achieve significant improvements on several automatic evaluation metrics. Furthermore, our model is capable of generating diverse questions for a sentence which conveys multiple relations of its answer fragment. | {
"paragraphs": [
[
"Question Generation (QG) is the task of automatically creating questions from a range of inputs, such as natural language text BIBREF0, knowledge base BIBREF1 and image BIBREF2. QG is an increasingly important area in NLP with various application scenarios such as intelligence tutor systems, open-domain chatbots and question answering dataset construction. In this paper, we focus on question generation from reading comprehension materials like SQuAD BIBREF3. As shown in Figure FIGREF1, given a sentence in the reading comprehension paragraph and the text fragment (i.e., the answer) that we want to ask about, we aim to generate a question that is asked about the specified answer.",
"Question generation for reading comprehension is firstly formalized as a declarative-to-interrogative sentence transformation problem with predefined rules or templates BIBREF4, BIBREF0. With the rise of neural models, Du2017LearningTA propose to model this task under the sequence-to-sequence (Seq2Seq) learning framework BIBREF5 with attention mechanism BIBREF6. However, question generation is a one-to-many sequence generation problem, i.e., several aspects can be asked given a sentence. Zhou2017NeuralQG propose the answer-aware question generation setting which assumes the answer, a contiguous span inside the input sentence, is already known before question generation. To capture answer-relevant words in the sentence, they adopt a BIO tagging scheme to incorporate the answer position embedding in Seq2Seq learning. Furthermore, Sun2018AnswerfocusedAP propose that tokens close to the answer fragments are more likely to be answer-relevant. Therefore, they explicitly encode the relative distance between sentence words and the answer via position embedding and position-aware attention.",
"Although existing proximity-based answer-aware approaches achieve reasonable performance, we argue that such intuition may not apply to all cases especially for sentences with complex structure. For example, Figure FIGREF1 shows such an example where those approaches fail. This sentence contains a few facts and due to the parenthesis (i.e. “the area's coldest month”), some facts intertwine: “The daily mean temperature in January is 0.3$^\\circ $C” and “January is the area's coldest month”. From the question generated by a proximity-based answer-aware baseline, we find that it wrongly uses the word “coldest” but misses the correct word “mean” because “coldest” has a shorter distance to the answer “0.3$^\\circ $C”.",
"In summary, their intuition that “the neighboring words of the answer are more likely to be answer-relevant and have a higher chance to be used in the question” is not reliable. To quantitatively show this drawback of these models, we implement the approach proposed by Sun2018AnswerfocusedAP and analyze its performance under different relative distances between the answer and other non-stop sentence words that also appear in the ground truth question. The results are shown in Table TABREF2. We find that the performance drops at most 36% when the relative distance increases from “$0\\sim 10$” to “$>10$”. In other words, when the useful context is located far away from the answer, current proximity-based answer-aware approaches will become less effective, since they overly emphasize neighboring words of the answer.",
"To address this issue, we extract the structured answer-relevant relations from sentences and propose a method to jointly model such structured relation and the unstructured sentence for question generation. The structured answer-relevant relation is likely to be to the point context and thus can help keep the generated question to the point. For example, Figure FIGREF1 shows our framework can extract the right answer-relevant relation (“The daily mean temperature in January”, “is”, “32.6$^\\circ $F (0.3$^\\circ $C)”) among multiple facts. With the help of such structured information, our model is less likely to be confused by sentences with a complex structure. Specifically, we firstly extract multiple relations with an off-the-shelf Open Information Extraction (OpenIE) toolbox BIBREF7, then we select the relation that is most relevant to the answer with carefully designed heuristic rules.",
"Nevertheless, it is challenging to train a model to effectively utilize both the unstructured sentence and the structured answer-relevant relation because both of them could be noisy: the unstructured sentence may contain multiple facts which are irrelevant to the target question, while the limitation of the OpenIE tool may produce less accurate extracted relations. To explore their advantages simultaneously and avoid the drawbacks, we design a gated attention mechanism and a dual copy mechanism based on the encoder-decoder framework, where the former learns to control the information flow between the unstructured and structured inputs, while the latter learns to copy words from two sources to maintain the informativeness and faithfulness of generated questions.",
"In the evaluations on the SQuAD dataset, our system achieves significant and consistent improvement as compared to all baseline methods. In particular, we demonstrate that the improvement is more significant with a larger relative distance between the answer and other non-stop sentence words that also appear in the ground truth question. Furthermore, our model is capable of generating diverse questions for a single sentence-answer pair where the sentence conveys multiple relations of its answer fragment."
],
[
"In this section, we first introduce the task definition and our protocol to extract structured answer-relevant relations. Then we formalize the task under the encoder-decoder framework with gated attention and dual copy mechanism."
],
[
"We formalize our task as an answer-aware Question Generation (QG) problem BIBREF8, which assumes answer phrases are given before generating questions. Moreover, answer phrases are shown as text fragments in passages. Formally, given the sentence $S$, the answer $A$, and the answer-relevant relation $M$, the task of QG aims to find the best question $\\overline{Q}$ such that,",
"where $A$ is a contiguous span inside $S$."
],
[
"We utilize an off-the-shelf toolbox of OpenIE to the derive structured answer-relevant relations from sentences as to the point contexts. Relations extracted by OpenIE can be represented either in a triple format or in an n-ary format with several secondary arguments, and we employ the latter to keep the extractions as informative as possible and avoid extracting too many similar relations in different granularities from one sentence. We join all arguments in the extracted n-ary relation into a sequence as our to the point context. Figure FIGREF5 shows n-ary relations extracted from OpenIE. As we can see, OpenIE extracts multiple relations for complex sentences. Here we select the most informative relation according to three criteria in the order of descending importance: (1) having the maximal number of overlapped tokens between the answer and the relation; (2) being assigned the highest confidence score by OpenIE; (3) containing maximum non-stop words. As shown in Figure FIGREF5, our criteria can select answer-relevant relations (waved in Figure FIGREF5), which is especially useful for sentences with extraneous information. In rare cases, OpenIE cannot extract any relation, we treat the sentence itself as the to the point context.",
"Table TABREF8 shows some statistics to verify the intuition that the extracted relations can serve as more to the point context. We find that the tokens in relations are 61% more likely to be used in the target question than the tokens in sentences, and thus they are more to the point. On the other hand, on average the sentences contain one more question token than the relations (1.86 v.s. 2.87). Therefore, it is still necessary to take the original sentence into account to generate a more accurate question."
],
[
"As shown in Figure FIGREF10, our framework consists offour components (1) Sentence Encoder and Relation Encoder, (2) Decoder, (3) Gated Attention Mechanism and (4) Dual Copy Mechanism. The sentence encoder and relation encoder encode the unstructured sentence and the structured answer-relevant relation, respectively. To select and combine the source information from the two encoders, a gated attention mechanism is employed to jointly attend both contextualized information sources, and a dual copy mechanism copies words from either the sentence or the relation."
],
[
"We employ two encoders to integrate information from the unstructured sentence $S$ and the answer-relevant relation $M$ separately. Sentence encoder takes in feature-enriched embeddings including word embeddings $\\mathbf {w}$, linguistic embeddings $\\mathbf {l}$ and answer position embeddings $\\mathbf {a}$. We follow BIBREF9 to transform POS and NER tags into continuous representation ($\\mathbf {l}^p$ and $\\mathbf {l}^n$) and adopt a BIO labelling scheme to derive the answer position embedding (B: the first token of the answer, I: tokens within the answer fragment except the first one, O: tokens outside of the answer fragment). For each word $w_i$ in the sentence $S$, we simply concatenate all features as input: $\\mathbf {x}_i^s= [\\mathbf {w}_i; \\mathbf {l}^p_i; \\mathbf {l}^n_i; \\mathbf {a}_i]$. Here $[\\mathbf {a};\\mathbf {b}]$ denotes the concatenation of vectors $\\mathbf {a}$ and $\\mathbf {b}$.",
"We use bidirectional LSTMs to encode the sentence $(\\mathbf {x}_1^s, \\mathbf {x}_2^s, ..., \\mathbf {x}_n^s)$ to get a contextualized representation for each token:",
"where $\\overrightarrow{\\mathbf {h}}^{s}_i$ and $\\overleftarrow{\\mathbf {h}}^{s}_i$ are the hidden states at the $i$-th time step of the forward and the backward LSTMs. The output state of the sentence encoder is the concatenation of forward and backward hidden states: $\\mathbf {h}^{s}_i=[\\overrightarrow{\\mathbf {h}}^{s}_i;\\overleftarrow{\\mathbf {h}}^{s}_i]$. The contextualized representation of the sentence is $(\\mathbf {h}^{s}_1, \\mathbf {h}^{s}_2, ..., \\mathbf {h}^{s}_n)$.",
"For the relation encoder, we firstly join all items in the n-ary relation $M$ into a sequence. Then we only take answer position embedding as an extra feature for the sequence: $\\mathbf {x}_i^m= [\\mathbf {w}_i; \\mathbf {a}_i]$. Similarly, we take another bidirectional LSTMs to encode the relation sequence and derive the corresponding contextualized representation $(\\mathbf {h}^{m}_1, \\mathbf {h}^{m}_2, ..., \\mathbf {h}^{m}_n)$."
],
[
"We use an LSTM as the decoder to generate the question. The decoder predicts the word probability distribution at each decoding timestep to generate the question. At the t-th timestep, it reads the word embedding $\\mathbf {w}_{t}$ and the hidden state $\\mathbf {u}_{t-1}$ of the previous timestep to generate the current hidden state:"
],
[
"We design a gated attention mechanism to jointly attend the sentence representation and the relation representation. For sentence representation $(\\mathbf {h}^{s}_1, \\mathbf {h}^{s}_2, ..., \\mathbf {h}^{s}_n)$, we employ the Luong2015EffectiveAT's attention mechanism to obtain the sentence context vector $\\mathbf {c}^s_t$,",
"where $\\mathbf {W}_a$ is a trainable weight. Similarly, we obtain the vector $\\mathbf {c}^m_t$ from the relation representation $(\\mathbf {h}^{m}_1, \\mathbf {h}^{m}_2, ..., \\mathbf {h}^{m}_n)$. To jointly model the sentence and the relation, a gating mechanism is designed to control the information flow from two sources:",
"where $\\odot $ represents element-wise dot production and $\\mathbf {W}_g, \\mathbf {W}_h$ are trainable weights. Finally, the predicted probability distribution over the vocabulary $V$ is computed as:",
"where $\\mathbf {W}_V$ and $\\mathbf {b}_V$ are parameters."
],
[
"To deal with the rare and unknown words, the decoder applies the pointing method BIBREF10, BIBREF11, BIBREF12 to allow copying a token from the input sentence at the $t$-th decoding step. We reuse the attention score $\\mathbf {\\alpha }_{t}^s$ and $\\mathbf {\\alpha }_{t}^m$ to derive the copy probability over two source inputs:",
"Different from the standard pointing method, we design a dual copy mechanism to copy from two sources with two gates. The first gate is designed for determining copy tokens from two sources of inputs or generate next word from $P_V$, which is computed as $g^v_t = \\text{sigmoid}(\\mathbf {w}^v_g \\tilde{\\mathbf {h}}_t + b^v_g)$. The second gate takes charge of selecting the source (sentence or relation) to copy from, which is computed as $g^c_t = \\text{sigmoid}(\\mathbf {w}^c_g [\\mathbf {c}_t^s;\\mathbf {c}_t^m] + b^c_g)$. Finally, we combine all probabilities $P_V$, $P_S$ and $P_M$ through two soft gates $g^v_t$ and $g^c_t$. The probability of predicting $w$ as the $t$-th token of the question is:"
],
[
"Given the answer $A$, sentence $S$ and relation $M$, the training objective is to minimize the negative log-likelihood with regard to all parameters:",
"where $\\mathcal {\\lbrace }Q\\rbrace $ is the set of all training instances, $\\theta $ denotes model parameters and $\\text{log} P(Q|A,S,M;\\theta )$ is the conditional log-likelihood of $Q$.",
"In testing, our model targets to generate a question $Q$ by maximizing:"
],
[
"We conduct experiments on the SQuAD dataset BIBREF3. It contains 536 Wikipedia articles and 100k crowd-sourced question-answer pairs. The questions are written by crowd-workers and the answers are spans of tokens in the articles. We employ two different data splits by following Zhou2017NeuralQG and Du2017LearningTA . In Zhou2017NeuralQG, the original SQuAD development set is evenly divided into dev and test sets, while Du2017LearningTA treats SQuAD development set as its development set and splits original SQuAD training set into a training set and a test set. We also filter out questions which do not have any overlapped non-stop words with the corresponding sentences and perform some preprocessing steps, such as tokenization and sentence splitting. The data statistics are given in Table TABREF27.",
"We evaluate with all commonly-used metrics in question generation BIBREF13: BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19. We use the evaluation script released by Chen2015MicrosoftCC."
],
[
"We compare with the following models.",
"[leftmargin=*]",
"s2s BIBREF13 proposes an attention-based sequence-to-sequence neural network for question generation.",
"NQG++ BIBREF9 takes the answer position feature and linguistic features into consideration and equips the Seq2Seq model with copy mechanism.",
"M2S+cp BIBREF14 conducts multi-perspective matching between the answer and the sentence to derive an answer-aware sentence representation for question generation.",
"s2s+MP+GSA BIBREF8 introduces a gated self-attention into the encoder and a maxout pointer mechanism into the decoder. We report their sentence-level results for a fair comparison.",
"Hybrid BIBREF15 is a hybrid model which considers the answer embedding for the question word generation and the position of context words for modeling the relative distance between the context words and the answer.",
"ASs2s BIBREF16 replaces the answer in the sentence with a special token to avoid its appearance in the generated questions."
],
[
"We take the most frequent 20k words as our vocabulary and use the GloVe word embeddings BIBREF20 for initialization. The embedding dimensions for POS, NER, answer position are set to 20. We use two-layer LSTMs in both encoder and decoder, and the LSTMs hidden unit size is set to 600.",
"We use dropout BIBREF21 with the probability $p=0.3$. All trainable parameters, except word embeddings, are randomly initialized with the Xavier uniform in $(-0.1, 0.1)$ BIBREF22. For optimization in the training, we use SGD as the optimizer with a minibatch size of 64 and an initial learning rate of 1.0. We train the model for 15 epochs and start halving the learning rate after the 8th epoch. We set the gradient norm upper bound to 3 during the training.",
"We adopt the teacher-forcing for the training. In the testing, we select the model with the lowest perplexity and beam search with size 3 is employed for generating questions. All hyper-parameters and models are selected on the validation dataset."
],
[
"Table TABREF30 shows automatic evaluation results for our model and baselines (copied from their papers). Our proposed model which combines structured answer-relevant relations and unstructured sentences achieves significant improvements over proximity-based answer-aware models BIBREF9, BIBREF15 on both dataset splits. Presumably, our structured answer-relevant relation is a generalization of the context explored by the proximity-based methods because they can only capture short dependencies around answer fragments while our extractions can capture both short and long dependencies given the answer fragments. Moreover, our proposed framework is a general one to jointly leverage structured relations and unstructured sentences. All compared baseline models which only consider unstructured sentences can be further enhanced under our framework.",
"Recall that existing proximity-based answer-aware models perform poorly when the distance between the answer fragment and other non-stop sentence words that also appear in the ground truth question is large (Table TABREF2). Here we investigate whether our proposed model using the structured answer-relevant relations can alleviate this issue or not, by conducting experiments for our model under the same setting as in Table TABREF2. The broken-down performances by different relative distances are shown in Table TABREF40. We find that our proposed model outperforms Hybrid (our re-implemented version for this experiment) on all ranges of relative distances, which shows that the structured answer-relevant relations can capture both short and long term answer-relevant dependencies of the answer in sentences. Furthermore, comparing the performance difference between Hybrid and our model, we find the improvements become more significant when the distance increases from “$0\\sim 10$” to “$>10$”. One reason is that our model can extract relations with distant dependencies to the answer, which greatly helps our model ignore the extraneous information. Proximity-based answer-aware models may overly emphasize the neighboring words of answers and become less effective as the useful context becomes further away from the answer in the complex sentences. In fact, the breakdown intervals in Table TABREF40 naturally bound its sentence length, say for “$>10$”, the sentences in this group must be longer than 10. Thus, the length variances in these two intervals could be significant. To further validate whether our model can extract long term dependency words. We rerun the analysis of Table TABREF40 only for long sentences (length $>$ 20) of each interval. The improvement percentages over Hybrid are shown in Table TABREF40, which become more significant when the distance increases from “$0\\sim 10$” to “$>10$”."
],
[
"Figure FIGREF42 provides example questions generated by crowd-workers (ground truth questions), the baseline Hybrid BIBREF15, and our model. In the first case, there are two subsequences in the input and the answer has no relation with the second subsequence. However, we see that the baseline model prediction copies irrelevant words “The New York Times” while our model can avoid using the extraneous subsequence “The New York Times noted ...” with the help of the structured answer-relevant relation. Compared with the ground truth question, our model cannot capture the cross-sentence information like “her fifth album”, where the techniques in paragraph-level QG models BIBREF8 may help. In the second case, as discussed in Section SECREF1, this sentence contains a few facts and some facts intertwine. We find that our model can capture distant answer-relevant dependencies such as “mean temperature” while the proximity-based baseline model wrongly takes neighboring words of the answer like “coldest” in the generated question."
],
[
"Another interesting observation is that for the same answer-sentence pair, our model can generate diverse questions by taking different answer-relevant relations as input. Such capability improves the interpretability of our model because the model is given not only what to be asked (i.e., the answer) but also the related fact (i.e., the answer-relevant relation) to be covered in the question. In contrast, proximity-based answer-aware models can only generate one question given the sentence-answer pair regardless of how many answer-relevant relations in the sentence. We think such capability can also validate our motivation: questions should be generated according to the answer-aware relations instead of neighboring words of answer fragments. Figure FIGREF45 show two examples of diverse question generation. In the first case, the answer fragment `Hugh L. Dryden' is the appositive to `NASA Deputy Administrator' but the subject to the following tokens `announced the Apollo program ...'. Our framework can extract these two answer-relevant relations, and by feeding them to our model separately, we can receive two questions asking different relations with regard to the answer."
],
[
"The topic of question generation, initially motivated for educational purposes, is tackled by designing many complex rules for specific question types BIBREF4, BIBREF23. Heilman2010GoodQS improve rule-based question generation by introducing a statistical ranking model. First, they remove extraneous information in the sentence to transform it into a simpler one, which can be transformed easily into a succinct question with predefined sets of general rules. Then they adopt an overgenerate-and-rank approach to select the best candidate considering several features.",
"With the rise of dominant neural sequence-to-sequence learning models BIBREF5, Du2017LearningTA frame question generation as a sequence-to-sequence learning problem. Compared with rule-based approaches, neural models BIBREF24 can generate more fluent and grammatical questions. However, question generation is a one-to-many sequence generation problem, i.e., several aspects can be asked given a sentence, which confuses the model during train and prevents concrete automatic evaluation. To tackle this issue, Zhou2017NeuralQG propose the answer-aware question generation setting which assumes the answer is already known and acts as a contiguous span inside the input sentence. They adopt a BIO tagging scheme to incorporate the answer position information as learned embedding features in Seq2Seq learning. Song2018LeveragingCI explicitly model the information between answer and sentence with a multi-perspective matching model. Kim2019ImprovingNQ also focus on the answer information and proposed an answer-separated Seq2Seq model by masking the answer with special tokens. All answer-aware neural models treat question generation as a one-to-one mapping problem, but existing models perform poorly for sentences with a complex structure (as shown in Table TABREF2).",
"Our work is inspired by the process of extraneous information removing in BIBREF0, BIBREF25. Different from Heilman2010GoodQS which directly use the simplified sentence for generation and cao2018faithful which only consider aggregate two sources of information via gated attention in summarization, we propose to combine the structured answer-relevant relation and the original sentence. Factoid question generation from structured text is initially investigated by Serban2016GeneratingFQ, but our focus here is leveraging structured inputs to help question generation over unstructured sentences. Our proposed model can take advantage of unstructured sentences and structured answer-relevant relations to maintain informativeness and faithfulness of generated questions. The proposed model can also be generalized in other conditional sequence generation tasks which require multiple sources of inputs, e.g., distractor generation for multiple choice questions BIBREF26."
],
[
"In this paper, we propose a question generation system which combines unstructured sentences and structured answer-relevant relations for generation. The unstructured sentences maintain the informativeness of generated questions while structured answer-relevant relations keep the faithfulness of questions. Extensive experiments demonstrate that our proposed model achieves state-of-the-art performance across several metrics. Furthermore, our model can generate diverse questions with different structured answer-relevant relations. For future work, there are some interesting dimensions to explore, such as difficulty levels BIBREF27, paragraph-level information BIBREF8 and conversational question generation BIBREF28."
],
[
"This work is supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14208815 and No. CUHK 14210717 of the General Research Fund). We would like to thank the anonymous reviewers for their comments. We would also like to thank Department of Computer Science and Engineering, The Chinese University of Hong Kong for the conference grant support."
]
],
"section_name": [
"Introduction",
"Framework Description",
"Framework Description ::: Problem Definition",
"Framework Description ::: Answer-relevant Relation Extraction",
"Framework Description ::: Our Proposed Model ::: Overview.",
"Framework Description ::: Our Proposed Model ::: Answer-aware Encoder.",
"Framework Description ::: Our Proposed Model ::: Decoder.",
"Framework Description ::: Our Proposed Model ::: Gated Attention Mechanism.",
"Framework Description ::: Our Proposed Model ::: Dual Copy Mechanism.",
"Framework Description ::: Our Proposed Model ::: Training and Inference.",
"Experimental Setting ::: Dataset & Metrics",
"Experimental Setting ::: Baseline Models",
"Experimental Setting ::: Implementation Details",
"Results and Analysis ::: Main Results",
"Results and Analysis ::: Case Study",
"Results and Analysis ::: Diverse Question Generation",
"Related Work",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"0c6e38128ed95b43ec1074d5d11217e73de0c96d",
"54d3e0d431db6737beeb9eb53e8f941757de6790"
],
"answer": [
{
"evidence": [
"To address this issue, we extract the structured answer-relevant relations from sentences and propose a method to jointly model such structured relation and the unstructured sentence for question generation. The structured answer-relevant relation is likely to be to the point context and thus can help keep the generated question to the point. For example, Figure FIGREF1 shows our framework can extract the right answer-relevant relation (“The daily mean temperature in January”, “is”, “32.6$^\\circ $F (0.3$^\\circ $C)”) among multiple facts. With the help of such structured information, our model is less likely to be confused by sentences with a complex structure. Specifically, we firstly extract multiple relations with an off-the-shelf Open Information Extraction (OpenIE) toolbox BIBREF7, then we select the relation that is most relevant to the answer with carefully designed heuristic rules."
],
"extractive_spans": [],
"free_form_answer": "Using the OpenIE toolbox and applying heuristic rules to select the most relevant relation.",
"highlighted_evidence": [
"Specifically, we firstly extract multiple relations with an off-the-shelf Open Information Extraction (OpenIE) toolbox BIBREF7, then we select the relation that is most relevant to the answer with carefully designed heuristic rules."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We utilize an off-the-shelf toolbox of OpenIE to the derive structured answer-relevant relations from sentences as to the point contexts. Relations extracted by OpenIE can be represented either in a triple format or in an n-ary format with several secondary arguments, and we employ the latter to keep the extractions as informative as possible and avoid extracting too many similar relations in different granularities from one sentence. We join all arguments in the extracted n-ary relation into a sequence as our to the point context. Figure FIGREF5 shows n-ary relations extracted from OpenIE. As we can see, OpenIE extracts multiple relations for complex sentences. Here we select the most informative relation according to three criteria in the order of descending importance: (1) having the maximal number of overlapped tokens between the answer and the relation; (2) being assigned the highest confidence score by OpenIE; (3) containing maximum non-stop words. As shown in Figure FIGREF5, our criteria can select answer-relevant relations (waved in Figure FIGREF5), which is especially useful for sentences with extraneous information. In rare cases, OpenIE cannot extract any relation, we treat the sentence itself as the to the point context."
],
"extractive_spans": [
"off-the-shelf toolbox of OpenIE"
],
"free_form_answer": "",
"highlighted_evidence": [
"We utilize an off-the-shelf toolbox of OpenIE to the derive structured answer-relevant relations from sentences as to the point contexts. Relations extracted by OpenIE can be represented either in a triple format or in an n-ary format with several secondary arguments, and we employ the latter to keep the extractions as informative as possible and avoid extracting too many similar relations in different granularities from one sentence."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3f4ed76889d001c99b7a082c455b4cb4bda2a8f7"
],
"answer": [
{
"evidence": [
"Table TABREF30 shows automatic evaluation results for our model and baselines (copied from their papers). Our proposed model which combines structured answer-relevant relations and unstructured sentences achieves significant improvements over proximity-based answer-aware models BIBREF9, BIBREF15 on both dataset splits. Presumably, our structured answer-relevant relation is a generalization of the context explored by the proximity-based methods because they can only capture short dependencies around answer fragments while our extractions can capture both short and long dependencies given the answer fragments. Moreover, our proposed framework is a general one to jointly leverage structured relations and unstructured sentences. All compared baseline models which only consider unstructured sentences can be further enhanced under our framework.",
"FLOAT SELECTED: Table 4: The main experimental results for our model and several baselines. ‘-’ means no results reported in their papers. (Bn: BLEU-n, MET: METEOR, R-L: ROUGE-L)"
],
"extractive_spans": [],
"free_form_answer": "Metrics show better results on all metrics compared to baseline except Bleu1 on Zhou split (worse by 0.11 compared to baseline). Bleu1 score on DuSplit is 45.66 compared to best baseline 43.47, other metrics on average by 1",
"highlighted_evidence": [
"Table TABREF30 shows automatic evaluation results for our model and baselines (copied from their papers).",
"FLOAT SELECTED: Table 4: The main experimental results for our model and several baselines. ‘-’ means no results reported in their papers. (Bn: BLEU-n, MET: METEOR, R-L: ROUGE-L)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"08cb7639659d1ef57912a51fc7dea8699c76d73c",
"092a3b2217de67dad49c2698db3c644efe8b11fb"
],
"answer": [
{
"evidence": [
"We evaluate with all commonly-used metrics in question generation BIBREF13: BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19. We use the evaluation script released by Chen2015MicrosoftCC."
],
"extractive_spans": [
"BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate with all commonly-used metrics in question generation BIBREF13: BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19. We use the evaluation script released by Chen2015MicrosoftCC."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate with all commonly-used metrics in question generation BIBREF13: BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19. We use the evaluation script released by Chen2015MicrosoftCC."
],
"extractive_spans": [
"BLEU-1 (B1)",
"BLEU-2 (B2)",
"BLEU-3 (B3)",
"BLEU-4 (B4)",
"METEOR (MET)",
"ROUGE-L (R-L)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate with all commonly-used metrics in question generation BIBREF13: BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"531b59469d98b703ab5a644cb5a19e3dbcf0ab24",
"e6f16cab5965d826fd0553c8ad8e0d90ee8bc3f7"
],
"answer": [
{
"evidence": [
"We conduct experiments on the SQuAD dataset BIBREF3. It contains 536 Wikipedia articles and 100k crowd-sourced question-answer pairs. The questions are written by crowd-workers and the answers are spans of tokens in the articles. We employ two different data splits by following Zhou2017NeuralQG and Du2017LearningTA . In Zhou2017NeuralQG, the original SQuAD development set is evenly divided into dev and test sets, while Du2017LearningTA treats SQuAD development set as its development set and splits original SQuAD training set into a training set and a test set. We also filter out questions which do not have any overlapped non-stop words with the corresponding sentences and perform some preprocessing steps, such as tokenization and sentence splitting. The data statistics are given in Table TABREF27."
],
"extractive_spans": [
"SQuAD"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct experiments on the SQuAD dataset BIBREF3."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Question Generation (QG) is the task of automatically creating questions from a range of inputs, such as natural language text BIBREF0, knowledge base BIBREF1 and image BIBREF2. QG is an increasingly important area in NLP with various application scenarios such as intelligence tutor systems, open-domain chatbots and question answering dataset construction. In this paper, we focus on question generation from reading comprehension materials like SQuAD BIBREF3. As shown in Figure FIGREF1, given a sentence in the reading comprehension paragraph and the text fragment (i.e., the answer) that we want to ask about, we aim to generate a question that is asked about the specified answer."
],
"extractive_spans": [
"SQuAD"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we focus on question generation from reading comprehension materials like SQuAD BIBREF3. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How they extract \"structured answer-relevant relation\"?",
"How big are significant improvements?",
"What metrics do they use?",
"On what datasets are experiments performed?"
],
"question_id": [
"1d197cbcac7b3f4015416f0152a6692e881ada6c",
"92294820ac0d9421f086139e816354970f066d8a",
"477d9d3376af4d938bb01280fe48d9ae7c9cf7f7",
"f225a9f923e4cdd836dd8fe097848da06ec3e0cc"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: An example SQuAD question with the baseline’s prediction. The answer (“0.3 ◦C”) is highlighted.",
"Table 1: Performance for the average relative distance between the answer fragment and other non-stop sentence words that also appear in the ground truth question. (Bn: BLEU-n, MET: METEOR, R-L: ROUGE-L)",
"Table 2: Comparisons between sentences and answerrelevant relations. Overlapped words are those non-stop tokens co-occurring in the source (sentence/relation) and the target question. Copy ratio means the proportion of source tokens that are used in the question.",
"Figure 2: Examples for n-ary extractions from sentences using OpenIE. Confidence scores are shown at the beginning of each relation. Answers are highlighted in sentences. Waved relations are selected according to our criteria in Section 2.2.",
"Figure 3: The framework of our proposed model. (Best viewed in color)",
"Table 3: Dataset statistics on Du Split (Du et al., 2017) and Zhou Split (Zhou et al., 2017).",
"Table 4: The main experimental results for our model and several baselines. ‘-’ means no results reported in their papers. (Bn: BLEU-n, MET: METEOR, R-L: ROUGE-L)",
"Table 5: Performance for the average relative distance between the answer fragment and other non-stop sentence words that also appear in the ground truth question (BLEU is the average over BLEU-1 to BLEU-4). Values in parenthesis are the improvement percentage of Our Model over Hybrid. (a) is based on all sentences while (b) only considers long sentences with more than 20 words.",
"Figure 4: Example questions (with answers highlighted) generated by crowd-workers (ground truth questions), the baseline model and our model.",
"Figure 5: Example diverse questions (with answers highlighted) generated by our model with different answer-relevant relations."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"3-Table2-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"7-Figure4-1.png",
"8-Figure5-1.png"
]
} | [
"How they extract \"structured answer-relevant relation\"?",
"How big are significant improvements?"
] | [
[
"1910.06036-Introduction-4",
"1910.06036-Framework Description ::: Answer-relevant Relation Extraction-0"
],
[
"1910.06036-6-Table4-1.png",
"1910.06036-Results and Analysis ::: Main Results-0"
]
] | [
"Using the OpenIE toolbox and applying heuristic rules to select the most relevant relation.",
"Metrics show better results on all metrics compared to baseline except Bleu1 on Zhou split (worse by 0.11 compared to baseline). Bleu1 score on DuSplit is 45.66 compared to best baseline 43.47, other metrics on average by 1"
] | 145 |
2002.01984 | UNCC Biomedical Semantic Question Answering Systems. BioASQ: Task-7B, Phase-B | In this paper, we detail our submission to the 2019, 7th year, BioASQ competition. We present our approach for Task-7b, Phase B, Exact Answering Task. These Question Answering (QA) tasks include Factoid, Yes/No, List Type Question answering. Our system is based on a contextual word embedding model. We have used a Bidirectional Encoder Representations from Transformers(BERT) based system, fined tuned for biomedical question answering task using BioBERT. In the third test batch set, our system achieved the highest MRR score for Factoid Question Answering task. Also, for List type question answering task our system achieved the highest recall score in the fourth test batch set. Along with our detailed approach, we present the results for our submissions, and also highlight identified downsides for our current approach and ways to improve them in our future experiments. | {
"paragraphs": [
[
"BioASQ is a biomedical document classification, document retrieval, and question answering competition, currently in its seventh year. We provide an overview of our submissions to semantic question answering task (7b, Phase B) of BioASQ 7 (except for 'ideal answer' test, in which we did not participate this year). In this task systems are provided with biomedical questions and are required to submit ideal and exact answers to those questions. We have used BioBERT BIBREF0 based system , see also Bidirectional Encoder Representations from Transformers(BERT) BIBREF1, and we fine tuned it for the biomedical question answering task. Our system scored near the top for factoid questions for all the batches of the challenge. More specifially, in the third test batch set, our system achieved highest ‘MRR’ score for Factoid Question Answering task. Also, for List-type question answering task our system achieved highest recall score in the fourth test batch set. Along with our detailed approach, we present the results for our submissions and also highlight identified downsides for our current approach and ways to improve them in our future experiments. In last test batch results we placed 4th for List-type questions and 3rd for Factoid-type questions.)",
"The QA task is organized in two phases. Phase A deals with retrieval of the relevant document, snippets, concepts, and RDF triples, and phase B deals with exact and ideal answer generations (which is a paragraph size summary of snippets). Exact answer generation is required for factoid, list, and yes/no type question.",
"BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types. Exact answers for factoid type questions are evaluated using strict accuracy (the top answer), lenient accuracy (the top 5 answers), and MRR (Mean Reciprocal Rank) which takes into account the ranks of returned answers. Answers for the list type question are evaluated using precision, recall, and F-measure."
],
[
"Sharma et al. BIBREF3 describe a system with two stage process for factoid and list type question answering. Their system extracts relevant entities and then runs supervised classifier to rank the entities. Wiese et al. BIBREF4 propose neural network based model for Factoid and List-type question answering task. The model is based on Fast QA and predicts the answer span in the passage for a given question. The model is trained on SQuAD data set and fine tuned on the BioASQ data. Dimitriadis et al. BIBREF5 proposed two stage process for Factoid question answering task. Their system uses general purpose tools such as Metamap, BeCas to identify candidate sentences. These candidate sentences are represented in the form of features, and are then ranked by the binary classifier. Classifier is trained on candidate sentences extracted from relevant questions, snippets and correct answers from BioASQ challenge. For factoid question answering task highest ‘MRR’ achieved in the 6th edition of BioASQ competition is ‘0.4325’. Our system is a neural network model based on contextual word embeddings BIBREF1 and achieved a ‘MRR’ score ‘0.6103’ in one of the test batches for Factoid Question Answering task."
],
[
"BERT stands for \"Bidirectional Encoder Representations from Transformers\" BIBREF1 is a contextual word embedding model. Given a sentence as an input, contextual embedding for the words are returned. The BERT model was designed so it can be fine tuned for 11 different tasks BIBREF1, including question answering tasks. For a question answering task, question and paragraph (context) are given as an input. A BERT standard is that question text and paragraph text are separated by a separator [Sep]. BERT question-answering fine tuning involves adding softmax layer. Softmax layer takes contextual word embeddings from BERT as input and learns to identity answer span present in the paragraph (context). This process is represented in Figure FIGREF4.",
"BERT was originally trained to perform tasks such as language model creation using masked words and next-sentence-prediction. In other words BERT weights are learned such that context is used in building the representation of the word, not just as a loss function to help learn a context-independent representation. For detailed understanding of BERT Architecture, please refer to the original BERT paper BIBREF1."
],
[
"A ‘word embedding’ is a learned representation. It is represented in the form of vector where words that have the same meaning have a similar vector representation. Consider a word embedding model 'word2vec' BIBREF6 trained on a corpus. Word embeddings generated from the model are context independent that is, word embeddings are returned regardless of where the words appear in a sentence and regardless of e.g. the sentiment of the sentence. However, contextual word embedding models like BERT also takes context of the word into consideration."
],
[
"‘BERT’ and BioBERT are very similar in terms of architecture. Difference is that ‘BERT’ is pretrained on Wikipedia articles, whereas BioBERT version used in our experiments is pretrained on Wikipedia, PMC and PubMed articles. Therefore BioBERT model is expected to perform well with biomedical text, in terms of generating contextual word embeddings.",
"BioBERT model used in our experiments is based on BERT-Base Architecture; BERT-Base has 12 transformer Layers where as BERT-Large has 24 transformer layers. Moreover contextual word embedding vector size is 768 for BERT-Base and more for BERT-large. According to BIBREF1 Bert-Large, fine-tuned on SQuAD 1.1 question answering data BIBREF7 can achieve F1 Score of 90.9 for Question Answering task where as BERT-Base Fine-tuned on the same SQuAD question answering BIBREF7 data could achieve F1 score of 88.5. One downside of the current version BioBERT is that word-piece vocabulary is the same as that of original BERT Model, as a result word-piece vocabulary does not include biomedical jargon. Lee et al. BIBREF0 created BioBERT, using the same pre-trained BERT released by Google, and hence in the word-piece vocabulary (vocab.txt), as a result biomedical jargon is not included in word-piece vocabulary. Modifying word-piece vocabulary (vocab.txt) at this stage would loose original compatibility with ‘BERT’, hence it is left unmodified.",
"In our future work we would like to build pre-trained ‘BERT’ model from scratch. We would pretrain the model with biomedical corpus (PubMed, ‘PMC’) and Wikipedia. Doing so would give us scope to create word piece vocabulary to include biomedical jargon and there are chances of model performing better with biomedical jargon being included in the word piece vocabulary. We will consider this scenario in the future, or wait for the next version of BioBERT."
],
[
"For Factoid Question Answering task, we fine tuned BioBERT BIBREF0 with question answering data and added new features. Fig. FIGREF4 shows the architecture of BioBERT fine tuned for question answering tasks: Input to BioBERT is word tokenized embeddings for question and the paragraph (Context). As per the ‘BERT’ BIBREF1 standards, tokens ‘[CLS]’ and ‘[SEP]’ are appended to the tokenized input as illustrated in the figure. The resulting model has a softmax layer formed for predicting answer span indices in the given paragraph (Context). On test data, the fine tuned model generates $n$-best predictions for each question. For a question, $n$-best corresponds that $n$ answers are returned as possible answers in the decreasing order of confidence. Variable $n$ is configurable. In our paper, any further mentions of ‘answer returned by the model’ correspond to the top answer returned by the model."
],
[
"BioASQ provides the training data. This data is based on previous BioASQ competitions. Train data we have considered is aggregate of all train data sets till the 5th version of BioASQ competition. We cleaned the data, that is, question-answering data without answers are removed and left with a total count of ‘530’ question answers. The data is split into train and test data in the ratio of 94 to 6; that is, count of '495' for training and '35' for testing.",
"The original data format is converted to the BERT/BioBERT format, where BioBERT expects ‘start_index’ of the actual answer. The ‘start_index corresponds to the index of the answer text present in the paragraph/ Context. For finding ‘start_index’ we used built-in python function find(). The function returns the lowest index of the actual answer present in the context(paragraph). If the answer is not found ‘-1’ is returned as the index. The efficient way of finding start_index is, if the paragraph (Context) has multiple instances of answer text, then ‘start_index’ of the answer should be that instance of answer text whose context actually matches with what’s been asked in the question.",
"Example (Question, Answer and Paragraph from BIBREF8):",
"Question: Which drug should be used as an antidote in benzodiazepine overdose?",
"Answer: 'Flumazenil'",
"Paragraph(context):",
"\"Flumazenil use in benzodiazepine overdose in the UK: a retrospective survey of NPIS data. OBJECTIVE: Benzodiazepine (BZD) overdose (OD) continues to cause significant morbidity and mortality in the UK. Flumazenil is an effective antidote but there is a risk of seizures, particularly in those who have co-ingested tricyclic antidepressants. A study was undertaken to examine the frequency of use, safety and efficacy of flumazenil in the management of BZD OD in the UK. METHODS: A 2-year retrospective cohort study was performed of all enquiries to the UK National Poisons Information Service involving BZD OD. RESULTS: Flumazenil was administered to 80 patients in 4504 BZD-related enquiries, 68 of whom did not have ventilatory failure or had recognised contraindications to flumazenil. Factors associated with flumazenil use were increased age, severe poisoning and ventilatory failure. Co-ingestion of tricyclic antidepressants and chronic obstructive pulmonary disease did not influence flumazenil administration. Seizure frequency in patients not treated with flumazenil was 0.3%\".",
"Actual answer is 'Flumazenil', but there are multiple instances of word 'Flu-mazenil'. Efficient way to identify the start-index for 'Flumazenil'(answer) is to find that particular instance of the word 'Flumazenil' which matches the context for the question. In the above example 'Flumazenil' highlighted in bold is the actual instance that matches question's context. Unfortunately, we could not identify readily available tools that can achieve this goal. In our future work, we look forward to handling these scenarios effectively.",
"Note: The creators of 'SQuAD' BIBREF7 have handled the task of identifying answer's start_index effectively. But 'SQuAD' data set is much more general and does not include biomedical question answering data."
],
[
"During our training with the BioASQ data, learning rate is set to 3e-5, as mentioned in the BioBERT paper BIBREF0. We started training the model with 495 available train data and 35 test data by setting number of epochs to 50. After training with these hyper-parameters training accuracy(exact match) was 99.3%(overfitting) and testing accuracy is only 4%. In the next iteration we reduced the number of epochs to 25 then training accuracy is reduced to 98.5% and test accuracy moved to 5%. We further reduced number of epochs to 15, and the resulting training accuracy was 70% and test accuracy 15%. In the next iteration set number of epochs to 12 and achieved train accuracy of 57.7% and test accuracy 23.3%. Repeated the experiment with 11 epochs and found training accuracy to be 57.7% and test accuracy to be same 22%. In the next iteration we set number of epochs to '9' and found training accuracy of 48% and test accuracy of 15%. Hence optimum number of epochs is taken as 12 epochs.",
"During our error analysis we found that on test data, model tends to return text in the beginning of the context(paragraph) as the answer. On analysing train data, we found that there are '120'(out of '495') question answering data instances having start_index:0, meaning 120( 25%) question answering data has first word(s) in the context(paragraph) as the answer. We removed 70% of those instances in order to make train data more balanced. In the new train data set we are left with '411' question answering data instances. This time we got the highest test accuracy of 26% at 11 epochs. We have submitted our results for BioASQ test batch-2, got strict accuracy of 32% and our system stood in 2nd place. Initially, hyper-parameter- 'batch size' is set to ‘400’. Later it is tuned to '32'. Although accuracy(exact answer match) remained at 26%, model generated concise and better answers at batch size ‘32’, that is wrong answers are close to the expected answer in good number of cases.",
"Example.(from BIBREF8)",
"Question: Which mutated gene causes Chediak Higashi Syndrome?",
"Exact Answer: ‘lysosomal trafficking regulator gene’.",
"The answer returned by a model trained at ‘400’ batch size is ‘Autosomal-recessive complicated spastic paraplegia with a novel lysosomal trafficking regulator’, and from the one trained at ‘32’ batch size is ‘lysosomal trafficking regulator’.",
"In further experiments, we have fine tuned the BioBERT model with both ‘SQuAD’ dataset (version 2.0) and BioAsq train data. For training on ‘SQuAD’, hyper parameters- Learning rate and number of epochs are set to ‘3e-3’ and ‘3’ respectively as mentioned in the paper BIBREF1. Test accuracy of the model boosted to 44%. In one more experiment we trained model only on ‘SQuAD’ dataset, this time test accuracy of the model moved to 47%. The reason model did not perform up to the mark when trained with ‘SQuAD’ alongside BioASQ data could be that in formatted BioASQ data, start_index for the answer is not accurate, and affected the overall accuracy."
],
[
"We have experimented with several systems and their variations, e.g. created by training with specific additional features (see next subsection). Here is their list and short descriptions. Unfortunately we did not pay attention to naming, and the systems evolved between test batches, so the overall picture can only be understood by looking at the details.",
"When we started the experiments our objective was to see whether BioBERT and entailment-based techniques can provide value for in the context of biomedical question answering. The answer to both questions was a yes, qualified by many examples clearly showing the limitations of both methods. Therefore we tried to address some of these limitations using feature engineering with mixed results: some clear errors got corrected and new errors got introduced, without overall improvement but convincing us that in future experiments it might be worth trying feature engineering again especially if more training data were available.",
"Overall we experimented with several approaches with the following aspects of the systems changing between batches, that is being absent or present:",
"training on BioAsq data vs. training on SQuAD",
"using the BioAsq snippets for context vs. using the documents from the provided URLs for context",
"adding or not the LAT, i.e. lexical answer type, feature (see BIBREF9, BIBREF10 and an explanation in the subsection just below).",
"For Yes/No questions (only) we experimented with the entailment methods.",
"We will discuss the performance of these models below and in Section 6. But before we do that, let us discuss a feature engineering experiment which eventually produced mixed results, but where we feel it is potentially useful in future experiments."
],
[
"During error analysis we found that for some cases, answer being returned by the model is far away from what it is being asked in the Question.",
"Example: (from BIBREF8)",
"Question: Hy's law measures failure of which organ?",
"Actual Answer: ‘Liver’.",
"The answer returned by one of our models was ‘alanine aminotransferase’, which is an enzyme. The model returns an enzyme, when the question asked for the organ name. To address this type of errors, we decided to try the concepts of ‘Lexical Answer Type’ (LAT) and Focus Word, which was used in IBM Watson, see BIBREF11 for overview; BIBREF10 for technical details, and BIBREF9 for details on question analysis. In an example given in the last source we read:",
"POETS & POETRY: He was a bank clerk in the Yukon before he published \"Songs of a Sourdough\" in 1907.",
"The focus is the part of the question that is a reference to the answer. In the example above, the focus is \"he\".",
"LATs are terms in the question that indicate what type of entity is being asked for.",
"The headword of the focus is generally a LAT, but questions often contain additional LATs, and in the Jeopardy! domain, categories are an additional source of LATs.",
"(...) In the example, LATs are \"he\", \"clerk\", and \"poet\".",
"For example in the question \"Which plant does oleuropein originate from?\" (BIBREF8). The LAT here is ‘plant’. For the BioAsq task we did not need to explicitly distinguish between the focus and the LAT concepts. In this example, the expectation is that answer returned by the model is a plant. Thus it is conceivable that the cosine distance between contextual embedding of word 'plant' in the question and contextual embedding for the answer present in the paragraph(context) is comparatively low. As a result model learns to adjust its weights during training phase and returns answers with low cosine distance with the LAT.",
"We used Stanford CoreNLP BIBREF12 library to write rules for extracting lexical answer type present in the question, both 'parts of speech'(POS) and dependency parsing functionality was used. We incorporated the Lexical Answer Type into one of our systems, UNCC_QA1 in Batch 4. This system underperformed our system FACTOIDS by about 3% in the MRR measure, but corrected errors such as in the example above."
],
[
"There are different question types: ‘Which’, ‘What’, ‘When’, ‘How’ etc. Each type of question is being handled differently and there are commonalities among the rules written for different question types. Question words are identified through parts of speech tags: 'WDT', 'WRB' ,'WP'. We assumed that LAT is a ‘Noun’ and follows the question word. Often it was also a subject (nsubj). This process is illustrated in Fig.FIGREF15.",
"LAT computation was governed by a few simple rules, e.g. when a question has multiple words that are 'Subjects’ (and ‘Noun’), a word that is in proximity to the question word is considered as ‘LAT’. These rules are different for each \"Wh\" word.",
"Namely, when the word immediately following the question word is a Noun, window size is set to ‘3’. The window size ‘3’ means we iterate through the next ‘3’ words to check if any of the word is both Noun and Subject, If so, such word is considered the ‘LAT’; else the word that is present very next to the question word is considered as the ‘LAT’.",
"For questions with words ‘Which’ , ‘What’, ‘When’; a Noun immediately following the question word is very often the LAT, e.g. 'enzyme' in Which enzyme is targeted by Evolocumab?. When the word immediately following the question word is not a Noun, e.g. in What is the function of the protein Magt1? the window size is set to ‘5’, and we iterate through the next ‘5’ words (if present) and search for the word that is both Noun and Subject. If present, the word is considered as the ‘LAT’; else, the Noun in close proximity to the question word and following it is returned as the ‘LAT’.",
"For questions with question words: ‘When’, ‘Who’, ‘Why’, the ’LAT’ is a question word itself. For the word ‘How', e.g. in How many selenoproteins are encoded in the human genome?, we look at the adjective and if we find one, we take it to be the LAT, otherwise the word 'How' is considered as the ‘LAT’.",
"Perhaps because of using only very simple rules, the accuracy for ‘LAT’ derivation is 75%; that is, in the remaining 25% of the cases the LAT word is identified incorrectly. Worth noting is that the overall performance the system that used LATs was slightly inferior to the system without LATs, but the types of errors changed. The training used BioBERT with the LAT feature as part of the input string. The errors it introduces usually involve finding the wrong element of the correct type e.g. wrong enzyme when two similar enzymes are described in the text, or 'neuron' when asked about a type of cell with a certain function, when the answer calls for a different cell category, adipocytes, and both are mentioned in the text. We feel with more data and additional tuning or perhaps using an ensemble model, we might be able to keep the correct answers, and improve the results on the confusing examples like the one mentioned above. Therefore if we improve our ‘LAT’ derivation logic, or have larger datasets, then perhaps the neural network techniques they will yield better results."
],
[
"Training on BioAsq data in our entry in Batch 1 and Batch 2 under the name QA1 showed it might lead to overfitting. This happened both with (Batch 2) and without (Batch 1) hyperparameters tuning: abysmal 18% MRR in Batch 1, and slighly better one, 40% in Batch 2 (although in Batch 2 it was overall the second best result in MRR but 16% lower than the highest score).",
"In Batch 3 (only), our UNCC_QA3 system was fine tuned on BioAsq and SQuAD 2.0 BIBREF7, and for data preprocessing Context paragraph is generated from relevant snippets provided in the test data. This system underperformed, by about 2% in MRR, our other entry UNCC_QA1, which was also an overall category winner for this batch. The latter was also trained on SQuAD, but not on BioAsq. We suspect that the reason could be the simplistic nature of the find() function described in Section 3.1. So, this could be an area where a better algorithm for finding the best occurrence of an entity could improve performance."
],
[
"In some experiments, for context in testing, we used documents for which URL pointers are provided in BioAsq. However, our system UNCC_QA3 underperformed our other system tested only on the provided snippets.",
"In Batch 5 the underperformance was about 6% of MRR, compared to our best system UNCC_QA1, and by 9% to the top performer."
],
[
"Our work focused on Factoid questions. But we also have done experiments on List-type and Yes/No questions."
],
[
"We started by answering always YES (in batch 2 and 3) to get the baseline performance. For batch 4 we used entailment. Our algorithm was very simple: Given a question we iterate through the candidate sentences and try to find any candidate sentence is contradicting the question (with confidence over 50%), if so 'No' is returned as answer, else 'Yes' is returned. In batch 4 this strategy produced better than the BioAsq baseline performance, and compared to our other systems, the use of entailment increased the performance by about 13% (macro F1 score). We used 'AllenNlp' BIBREF13 entailment library to find entailment of the candidate sentences with question."
],
[
"Overall, we followed the similar strategy that's been followed for Factoid Question Answering task. We started our experiment with batch 2, where we submitted 20 best answers (with context from snippets). Starting with batch 3, we performed post processing: once models generate answer predictions (n-best predictions), we do post-processing on the predicted answers. In test batch 4, our system (called FACTOIDS) achieved highest recall score of ‘0.7033’ but low precision of 0.1119, leaving open the question of how could we have better balanced the two measures.",
"In the post-processing phase, we take the top ‘20’ (batch 3) and top 5 (batch 4 and 5), predicted answers, tokenize them using common separators: 'comma' , 'and', 'also', 'as well as'. Tokens with characters count more than ‘100’ are eliminated and rest of the tokens are added to the list of possible answers. BioASQ evaluation mechanism does not consider snippets with more than ‘100’ characters as a valid answer. Considering lengthy snippets in to the list of answers would reduce the mean precision score. As a final step, duplicate snippets in the answer pool are removed. For example, consider these top 3 answers predicted by the system (before post-processing):",
"{",
"\"text\": \"dendritic cells\",",
"\"probability\": 0.7554540733426441,",
"\"start_logit\": 8.466046333312988,",
"\"end_logit\": 9.536355018615723",
"},",
"{",
"\"text\": \"neutrophils, macrophages and",
"distinct subtypes of dendritic cells\",",
"\"probability\": 0.13806867348304214,",
"\"start_logit\": 6.766478538513184,",
"\"end_logit\": 9.536355018615723",
"},",
"{",
"\"text\": \"macrophages and distinct subtypes of dendritic\",",
"\"probability\": 0.013973475271178242,",
"\"start_logit\": 6.766478538513184,",
"\"end_logit\": 7.24576473236084",
"},",
"After execution of post-processing heuristics, the list of answers returned is as follows:",
"[\"dendritic cells\"],",
"[\"neutrophils\"],",
"[\"macrophages\"],",
"[\"distinct subtypes of dendritic cells\"]"
],
[
"The tables below summarize all our results. They show that the performance of our systems was mixed. The simple architectures and algorithm we used worked very well only in Batch 3. However, we feel we can built a better system based on this experience. In particular we observed both the value of contextual embeddings and of feature engineering (LAT), however we failed to combine them properly."
],
[
"System description for ‘UNCC_QA1’: The system was finetuned on the SQuAD 2.0. For data preprocessing Context / paragraph was generated from relevant snippets provided in the test data.",
"System description for ‘QA1’ : ‘LAT’ feature was added and finetuned with SQuAD 2.0. For data preprocessing Context / paragraph was generated from relevant snippets provided in the test data.",
"System description for ‘UNCC_QA3’ : Fine tuning process is same as it is done for the system ‘UNCC_QA1’ in test batch-5. Difference is during data preprocessing, Context/paragraph is generated from the relevant documents for which URLS are included in the test data."
],
[
"For List-type questions, although post processing helped in the later batches, we never managed to obtain competitive precision, although our recall was good."
],
[
"The only thing worth remembering from our performance is that using entailment can have a measurable impact (at least with respect to a weak baseline). The results (weak) are in Table 3."
],
[
"In contrast to 2018, when we submitted BIBREF2 to BioASQ a system based on extractive summarization (and scored very high in the ideal answer category), this year we mainly targeted factoid question answering task and focused on experimenting with BioBERT. After these experiments we see the promise of BioBERT in QA tasks, but we also see its limitations. The latter we tried to address with mixed results using feature engineering. Overall these experiments allowed us to secure a best and a second best score in different test batches. Along with Factoid-type question, we also tried ‘Yes/No’ and ‘List’-type questions, and did reasonably well with our very simple approach.",
"For Yes/No the moral worth remembering is that reasoning has a potential to influence results, as evidenced by our adding the AllenNLP entailment BIBREF13 system increased its performance.",
"All our data and software is available at Github, in the previously referenced URL (end of Section 2)."
],
[
"In the current model, we have a shallow neural network with a softmax layer for predicting answer span. Shallow networks however are not good at generalizations. In our future experiments we would like to create dense question answering neural network with a softmax layer for predicting answer span. The main idea is to get contextual word embedding for the words present in the question and paragraph (Context) and feed the contextual word embeddings retrieved from the last layer of BioBERT to the dense question answering network. The mentioned dense layered question answering neural network need to be tuned for finding right hyper parameters. An example of such architecture is shown in Fig.FIGREF30.",
"In one more experiment, we would like to add a better version of ‘LAT’ contextual word embedding as a feature, along with the actual contextual word embeddings for question text, and Context and feed them as input to the dense question answering neural network. By this experiment, we would like to find if ‘LAT’ feature is improving overall answer prediction accuracy. Adding ‘LAT’ feature this way instead of feeding this word piece embedding directly to the BioBERT (as we did in our above experiments) would not downgrade the quality of contextual word embeddings generated form ‘BioBERT'. Quality contextual word embeddings would lead to efficient transfer learning and chances are that it would improve the model's answer prediction accuracy.",
"We also see potential for incorporating domain specific inference into the task e.g. using the MedNLI dataset BIBREF14. For all types of experiments it might be worth exploring clinical BERT embeddings BIBREF15, explicitly incorporating domain knowledge (e.g. BIBREF16) and possibly deeper discourse representations (e.g. BIBREF17)."
],
[
"In this appendix we provide additional details about the implementations."
],
[
"We used several variants of our systems when experimenting with the BioASQ problems. In retrospect, it would be much easier to understand the changes if we adopted some mnemonic conventions in naming the systems. So, we apologize for the names that do not reflect the modifications, and necessitate this list."
],
[
"We preprocessed the test data to convert test data to BioBERT format, We generated Context/paragraph by either aggregating relevant snippets provided or by aggregating documents for which URLS are provided in the BioASQ test data."
],
[
"We generated Context/paragraph by aggregating relevant snippets available in the test data and mapped it against the question text and question id. We ignored the content present in the documents (document URLS were provided in the original test data). The model is finetuned with BioASQ data.",
"data preprocessing is done in the same way as it is done for test batch-1. Model fine tuned on BioASQ data.",
"‘LAT’/ Focus word feature added and fine tuned with SQuAD 2.0 [reference]. For data preprocessing Context / paragraph is generated from relevant snippets provided in the test data."
],
[
"System is finetuned on the SQuAD 2.0 [reference]. For data preprocessing Context / paragraph is generated from relevant snippets provided in the test data.",
"‘LAT’/ Focus word feature added and fine tuned with SQuAD 2.0 [reference]. For data preprocessing Context / paragraph is generated from relevant snippets provided in the test data.",
"The System is finetuned on the SQuAD 2.0. For data preprocessing Context / paragraph is generated from relevant snippets provided in the test data."
],
[
"System is finetuned on the SQuAD 2.0 [reference] and BioASQ dataset[].For data preprocessing Context / paragraph is generated from relevant snippets provided in the test data.",
"Fine tuning process is same as it is done for the system ‘UNCC_QA_1’ in test batch-5. Difference is during data preprocessing, Context/paragraph is generated form from the relevant documents for which URLS are included in the test data."
],
[
"Fine tuning process is same as for ‘UNCC_QA_1 ’. Difference is Context/paragraph is generated form from the relevant documents for which URLS are included in the test data. System ‘UNCC_QA_1’ got the highest ‘MRR’ score in the 3rd test batch set."
],
[
"The System is finetuned on the SQuAD 2.0. For data preprocessing Context / paragraph is generated from relevant snippets provided in the test data."
],
[
"We attempted List type questions starting from test batch ‘2’. Used similar approach that's been followed for Factoid Question answering task. For all the test batch sets, in the data pre processing phase Context/ paragraph is generated either by aggregating relevant snippets or by aggregating documents(URLS) provided in the BioASQ test data.",
"For test batch-2, model (System: QA1) is finetuned on BioASQ data and submitted top ‘20’ answers predicted by the model as the list of answers. system ‘QA1’ achieved low F-Measure score:‘0.0786’ in the second test batch. In the further test batches for List type questions, we finetuned the model on Squad data set [reference], implemented post processing techniques (refer section 5.2) and achieved a better F-measure score: ‘0.2862’ in the final test batch set.",
"In test batch-3 (Systems : ‘QA1’/’’UNCC_QA_1’/’UNCC_QA3’/’UNCC_QA2’) top 20 answers returned by the model is sent for post processing and in test batch 4 and 5 only top 5 answers are sent for post processing. System UNCC_QA2(in batch 3) for List type question answering, Context is generated from documents for which URLS are provided in the BioASQ test data. for the rest of the systems (in test batch-3) for List Type question answering task snippets present in the BioaSQ test data are used to generate context.",
"In test batch-4 (System : ‘FACTOIDS’/’UNCC_QA_1’/’UNCC_QA3’’) top 5 answers returned by the model is sent for post processing. In case of system ‘FACTOIDS’ snippets in the test data were used to generate context. for systems ’UNCC_QA_1’ and ’UNCC_QA3’ context is generated from the documents for which URLS are provided in the BioASQ test data.",
"In test batch-5 ( Systems: ‘QA1’/’UNCC_QA_1’/’UNCC_QA3’/’UNCC_QA2’ ) our approach is the same as that of test batch-4 where top 5 answers returned by the model is sent for post processing. for all the systems (in test batch-5) context is generated from the snippets provided in the BioASQ test data."
],
[
"For the first 3 test batches, We have submitted answer ‘Yes’ to all the questions. Later, we employed ‘Sentence Entailment’ techniques(refer section 6.0) for the fourth and fifth test batch sets. Our Systems with ‘Sentence Entailment’ approach (for ‘Yes’/ ‘No’ question answering): ‘UNCC_QA_1’(test batch-4), UNCC_QA3(test batch-5)."
],
[
"We used Textual Entailment in Batch 4 and 5 for ‘Yes’/‘No’ question type. The algorithm was very simple: Given a question we iterate through the candidate sentences, and look for any candidate sentences contradicting the question. If we find one 'No' is returned as answer, else 'Yes' is returned. (The confidence for contradiction was set at 50%) We used AllenNLP BIBREF13 entailment library to find entailment of the candidate sentences with question.",
"Flow Chart for Yes/No Question answer processing is shown in Fig.FIGREF51"
],
[
"There are different question types, and we distinguished them based on the question words: ‘Which’, ‘What’, ‘When’, ‘How’ etc. Each type of question is being handled differently and there are commonalities among the rules written for different question types. How are question words identified? question words have parts of speech(POS): 'WDT', 'WRB', 'WP'.",
"Assumptions:",
"1) Lexical answer type (‘LAT’) or focus word is of type Noun and follows the question word.",
"2) The LAT word is a Subject. (This clearly not always true, but we used a very simple method). Note: ‘StanfordNLP’ dependency parsing tag for identifying subject is 'nsubj' or 'nsubjpass'.",
"3) When a question has multiple words that are of type Subject (and Noun), a word that is in proximity to the question word is considered as ‘LAT’.",
"4) For questions with question words: ‘When’, ‘Who’, ‘Why’, the ’LAT’ is a question word itself that is, ‘When’, ‘Who’, ‘Why’ respectively.",
"Rules and logic flow to traverse a question: The three cases below describe the logic flow of finding LATs. The figures show the grammatical structures used for this purpose."
],
[
"Question with question word ‘How’.",
"For questions with question word 'How', the adjective that follows the question word is considered as ‘LAT’ (need not follow immediately). If an adjective is absent, word 'How' is considered as ‘LAT’. When there are multiple words that are adjectives, a word in close proximity to the question word and follows it is returned as ‘LAT’. Note: The part of speech tag to identify adjectives is 'JJ'. For Other possible question words like ‘whose’. ‘LAT’/Focus word is question words itself.",
"Example Question: How many selenoproteins are encoded in the human genome?"
],
[
"Questions with question words ‘Which’ , ‘What’ and all other possible question words; a 'Noun' immediately following the question word.",
"Example Question: Which enzyme is targeted by Evolocumab?",
"Here, Focus word/LAT is ‘enzyme’ which is both Noun and Subject and immediately follows the question word.",
"When the word immediately following the question word is a noun, the window size is set to ‘3’. This size ‘3’ means that we iterate through the next ‘3’ words (if present) to check if any of the word is both 'Noun' and 'Subject', If so, the word is considered as ‘LAT’/Focus Word. Else the word that is present very next to the question word is considered as ‘LAT’."
],
[
"Questions with question words ‘Which’ , ‘What’ and all other possible question words; word immediately following the question word is not a 'Noun'.",
"Example Question: What is the function of the protein Magt1?",
"Here, Focus word/LAT is ‘function ’ which is both Noun and Subject and does not immediately follow the question word.",
"When the very next word following the question word is not a Noun, window size is set to ‘5’. Window size ‘5’ corresponds that we iterate through the next ‘5’ words (if present) and search for the word that is both Noun and Subject. If present, the word is considered as ‘LAT’. Else, the 'Noun' close proximity to the question word and follows it is returned as ‘LAT’.",
"Ad we mentioned earlier, the accuracy for ‘LAT’ derivation is 75 percent. But clearly the simple logic described above can be improved, as shown in BIBREF9, BIBREF10. Whether this in turn produces improvements in this particular task is an open question."
],
[
"In the current model, we have a shallow neural network with a softmax layer for predicting answer span. Shallow networks however are not good at generalizations. In our future experiments we would like to create dense question answering neural network with a softmax layer for predicting answer span. The main idea is to get contextual word embedding for the words present in the question and paragraph (Context) and feed the contextual word embeddings retrieved from the last layer of BioBERT to the dense question answering network. The mentioned dense layered question answering Neural network need to be tuned for finding right hyper parameters. An example of such architecture is shown in Fig.FIGREF30.",
"In another experiment we would like to only feed contextual word embeddings for Focus word/ ‘LAT’, paragraph/ Context as input to the question answering neural network. In this experiment we would neglect all embeddings for the question text except that of Focus word/ ‘LAT’. Our assumption and idea for considering focus word and neglecting remaining words in the question is that during training phase it would make more precise for the model to identify the focus of the question and map answers against the question’s focus. To validate our assumption, we would like to take sample question answering data and find the cosine distance between contextual embedding of Focus word and that of the actual answer and verify if the cosine distance is comparatively low in most of the cases.",
"In one more experiment, we would like to add a better version of ‘LAT’ contextual word embedding as a feature, along with the actual contextual word embeddings for question text, and Context and feed them as input to the dense question answering neural network. By this experiment, we would like to find if ‘LAT’ feature is improving overall answer prediction accuracy. Adding ‘LAT’ feature this way instead of feeding Focus word’s word piece embedding directly (as we did in our above experiments) to the BioBERT would not downgrade the quality of contextual word embeddings generated form ‘BioBERT'. Quality contextual word embeddings would lead to efficient transfer learning and chances are that it would improve the model's answer prediction accuracy."
]
],
"section_name": [
"Introduction",
"Related Work ::: BioAsq",
"Related Work ::: A minimum background on BERT",
"Related Work ::: A minimum background on BERT ::: Comparison of Word Embeddings and Contextual Word Embeddings",
"Related Work ::: Comparison of BERT and Bio-BERT",
"Experiments: Factoid Question Answering Task",
"Experiments: Factoid Question Answering Task ::: Setup",
"Experiments: Factoid Question Answering Task ::: Training and error analysis",
"Our Systems and Their Performance on Factoid Questions",
"Our Systems and Their Performance on Factoid Questions ::: LAT Feature considered and its impact (slightly negative)",
"Our Systems and Their Performance on Factoid Questions ::: LAT Feature considered and its impact (slightly negative) ::: Assumptions and rules for deriving lexical answer type.",
"Our Systems and Their Performance on Factoid Questions ::: Impact of Training using BioAsq data (slightly negative)",
"Our Systems and Their Performance on Factoid Questions ::: Impact of Using Context from URLs (negative)",
"Performance on Yes/No and List questions",
"Performance on Yes/No and List questions ::: Entailment improves Yes/No accuracy",
"Performance on Yes/No and List questions ::: For List-type the URLs have negative impact",
"Summary of our results",
"Summary of our results ::: Factoid questions ::: Systems used in Batch 5 experiments",
"Summary of our results ::: List Questions",
"Summary of our results ::: Yes/No questions",
"Discussion, Future Experiments, and Conclusions ::: Summary:",
"Discussion, Future Experiments, and Conclusions ::: Future experiments",
"APPENDIX",
"APPENDIX ::: Systems and their descriptions:",
"APPENDIX ::: Systems and their descriptions: ::: Factoid Type Question Answering:",
"APPENDIX ::: Systems and their descriptions: ::: System description for QA1:",
"APPENDIX ::: Systems and their descriptions: ::: System description for UNCC_QA_1:",
"APPENDIX ::: Systems and their descriptions: ::: System description for UNCC_QA3:",
"APPENDIX ::: Systems and their descriptions: ::: System description for UNCC_QA2:",
"APPENDIX ::: Systems and their descriptions: ::: System description for FACTOIDS:",
"APPENDIX ::: Systems and their descriptions: ::: List Type Questions:",
"APPENDIX ::: Systems and their descriptions: ::: Yes/No Type Questions:",
"APPENDIX ::: Additional details for Yes/No Type Questions",
"APPENDIX ::: Assumptions, rules and logic flow for deriving Lexical Answer Types from questions",
"APPENDIX ::: Assumptions, rules and logic flow for deriving Lexical Answer Types from questions ::: Case-1:",
"APPENDIX ::: Assumptions, rules and logic flow for deriving Lexical Answer Types from questions ::: Case-2:",
"APPENDIX ::: Assumptions, rules and logic flow for deriving Lexical Answer Types from questions ::: Case-3:",
"APPENDIX ::: Proposing Future Experiments"
]
} | {
"answers": [
{
"annotation_id": [
"3155c144cd93f2c73447bad2b751356a4e64a40c"
],
"answer": [
{
"evidence": [
"We started by answering always YES (in batch 2 and 3) to get the baseline performance. For batch 4 we used entailment. Our algorithm was very simple: Given a question we iterate through the candidate sentences and try to find any candidate sentence is contradicting the question (with confidence over 50%), if so 'No' is returned as answer, else 'Yes' is returned. In batch 4 this strategy produced better than the BioAsq baseline performance, and compared to our other systems, the use of entailment increased the performance by about 13% (macro F1 score). We used 'AllenNlp' BIBREF13 entailment library to find entailment of the candidate sentences with question."
],
"extractive_spans": [
"by answering always YES (in batch 2 and 3) "
],
"free_form_answer": "",
"highlighted_evidence": [
"We started by answering always YES (in batch 2 and 3) to get the baseline performance. For batch 4 we used entailment."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"1d1c363a451be0c5d94cc85ec253a8232218a9f7",
"fa8e7b5902764267d7dceb1ed108979e60fd9a91"
],
"answer": [
{
"evidence": [
"BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types. Exact answers for factoid type questions are evaluated using strict accuracy (the top answer), lenient accuracy (the top 5 answers), and MRR (Mean Reciprocal Rank) which takes into account the ranks of returned answers. Answers for the list type question are evaluated using precision, recall, and F-measure."
],
"extractive_spans": [],
"free_form_answer": "BioASQ dataset",
"highlighted_evidence": [
"BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types. Exact answers for factoid type questions are evaluated using strict accuracy (the top answer), lenient accuracy (the top 5 answers), and MRR (Mean Reciprocal Rank) which takes into account the ranks of returned answers. Answers for the list type question are evaluated using precision, recall, and F-measure."
],
"extractive_spans": [],
"free_form_answer": "A dataset provided by BioASQ consisting of questions, gold standard documents, snippets, concepts and ideal and ideal answers.",
"highlighted_evidence": [
"BioASQ organizers provide the training and testing data. The training data consists of questions, gold standard documents, snippets, concepts, and ideal answers (which we did not use in this paper, but we used last year BIBREF2). The test data is split between phases A and B. The Phase A dataset consists of the questions, unique ids, question types. The Phase B dataset consists of the questions, golden standard documents, snippets, unique ids and question types. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"0b3e74772a78a9e240889088122b6e946c2e3f4d",
"82f372d5e603af2000c0d5697b90a81dfb82f692"
],
"answer": [
{
"evidence": [
"Overall, we followed the similar strategy that's been followed for Factoid Question Answering task. We started our experiment with batch 2, where we submitted 20 best answers (with context from snippets). Starting with batch 3, we performed post processing: once models generate answer predictions (n-best predictions), we do post-processing on the predicted answers. In test batch 4, our system (called FACTOIDS) achieved highest recall score of ‘0.7033’ but low precision of 0.1119, leaving open the question of how could we have better balanced the two measures."
],
"extractive_spans": [
"0.7033"
],
"free_form_answer": "",
"highlighted_evidence": [
"In test batch 4, our system (called FACTOIDS) achieved highest recall score of ‘0.7033’ but low precision of 0.1119, leaving open the question of how could we have better balanced the two measures."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Overall, we followed the similar strategy that's been followed for Factoid Question Answering task. We started our experiment with batch 2, where we submitted 20 best answers (with context from snippets). Starting with batch 3, we performed post processing: once models generate answer predictions (n-best predictions), we do post-processing on the predicted answers. In test batch 4, our system (called FACTOIDS) achieved highest recall score of ‘0.7033’ but low precision of 0.1119, leaving open the question of how could we have better balanced the two measures."
],
"extractive_spans": [
"0.7033"
],
"free_form_answer": "",
"highlighted_evidence": [
" In test batch 4, our system (called FACTOIDS) achieved highest recall score of ‘0.7033’ but low precision of 0.1119, leaving open the question of how could we have better balanced the two measures."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"b7418ff83cd506ac076ebfef7f5c2d86315477a0",
"efc0c304e8523ba0c012edeb37942e0fd58975f7"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Factoid Questions. In Batch 3 we obtained the highest score. Also the relative distance between our best system and the top performing system shrunk between Batch 4 and 5."
],
"extractive_spans": [],
"free_form_answer": "0.5115",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Factoid Questions. In Batch 3 we obtained the highest score. Also the relative distance between our best system and the top performing system shrunk between Batch 4 and 5."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Sharma et al. BIBREF3 describe a system with two stage process for factoid and list type question answering. Their system extracts relevant entities and then runs supervised classifier to rank the entities. Wiese et al. BIBREF4 propose neural network based model for Factoid and List-type question answering task. The model is based on Fast QA and predicts the answer span in the passage for a given question. The model is trained on SQuAD data set and fine tuned on the BioASQ data. Dimitriadis et al. BIBREF5 proposed two stage process for Factoid question answering task. Their system uses general purpose tools such as Metamap, BeCas to identify candidate sentences. These candidate sentences are represented in the form of features, and are then ranked by the binary classifier. Classifier is trained on candidate sentences extracted from relevant questions, snippets and correct answers from BioASQ challenge. For factoid question answering task highest ‘MRR’ achieved in the 6th edition of BioASQ competition is ‘0.4325’. Our system is a neural network model based on contextual word embeddings BIBREF1 and achieved a ‘MRR’ score ‘0.6103’ in one of the test batches for Factoid Question Answering task."
],
"extractive_spans": [
"0.6103"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our system is a neural network model based on contextual word embeddings BIBREF1 and achieved a ‘MRR’ score ‘0.6103’ in one of the test batches for Factoid Question Answering task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What was the baseline model?",
"What dataset did they use?",
"What was their highest recall score?",
"What was their highest MRR score?"
],
"question_id": [
"ff338921e34c15baf1eae0074938bf79ee65fdd2",
"e807d347742b2799bc347c0eff19b4c270449fee",
"31b92c03d5b9be96abcc1d588d10651703aff716",
"9ec1f88ceec84a10dc070ba70e90a792fba8ce71"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: BioBERT fine tuned for question answering task",
"Figure 2: A simple way of finding the lexical answer types, LATs, of factoid questions: using POS tags to find the question word (e.g. ’which’), and a dependency parse to find the LAT within the window of 3 words. If a noun is not found near the ”Wh-” word, we iterate looking for it, as in the second panel.",
"Figure 3: An example of a using BioBERT with additional features: Contextual word embedding for Lexical Answer Type (LAT) given as feature along with the actual contextual embeddings for the words in question and the paragraph. This change produced mixed results and no overall improvement.",
"Table 1: Factoid Questions. In Batch 3 we obtained the highest score. Also the relative distance between our best system and the top performing system shrunk between Batch 4 and 5.",
"Table 2: List Questions",
"Table 3: Yes/No Questions",
"Figure 4: Proposed extension. In addition to using BioBERT we propose to add a network that would train on embeddings of questions (Emb Quesi) and paragraphs (Emb Pari) with additional LAT features, or more broadly other semantic features, like relations that can be derived from text",
"Figure 5: Flow Chart for Yes/No Question answer processing",
"Figure 6: Dependency parse visualization for the example ’how’ question.",
"Figure 7: Dependency parse visualization for the example ’which’ question.",
"Figure 8: Dependency parse visualization for the question about the function of the protein Magt1."
],
"file": [
"4-Figure1-1.png",
"10-Figure2-1.png",
"12-Figure3-1.png",
"15-Table1-1.png",
"16-Table2-1.png",
"17-Table3-1.png",
"18-Figure4-1.png",
"24-Figure5-1.png",
"26-Figure6-1.png",
"26-Figure7-1.png",
"27-Figure8-1.png"
]
} | [
"What dataset did they use?",
"What was their highest MRR score?"
] | [
[
"2002.01984-Introduction-2"
],
[
"2002.01984-15-Table1-1.png",
"2002.01984-Related Work ::: BioAsq-0"
]
] | [
"A dataset provided by BioASQ consisting of questions, gold standard documents, snippets, concepts and ideal and ideal answers.",
"0.5115"
] | 146 |
1909.00326 | Towards Understanding Neural Machine Translation with Word Importance | Although neural machine translation (NMT) has advanced the state-of-the-art on various language pairs, the interpretability of NMT remains unsatisfactory. In this work, we propose to address this gap by focusing on understanding the input-output behavior of NMT models. Specifically, we measure the word importance by attributing the NMT output to every input word through a gradient-based method. We validate the approach on a couple of perturbation operations, language pairs, and model architectures, demonstrating its superiority on identifying input words with higher influence on translation performance. Encouragingly, the calculated importance can serve as indicators of input words that are under-translated by NMT models. Furthermore, our analysis reveals that words of certain syntactic categories have higher importance while the categories vary across language pairs, which can inspire better design principles of NMT architectures for multi-lingual translation. | {
"paragraphs": [
[
"Neural machine translation (NMT) has achieved the state-of-the-art results on a mass of language pairs with varying structural differences, such as English-French BIBREF0, BIBREF1 and Chinese-English BIBREF2. However, so far not much is known about how and why NMT works, which pose great challenges for debugging NMT models and designing optimal architectures.",
"The understanding of NMT models has been approached primarily from two complementary perspectives. The first thread of work aims to understand the importance of representations by analyzing the linguistic information embedded in representation vectors BIBREF3, BIBREF4 or hidden units BIBREF5, BIBREF6. Another direction focuses on understanding the importance of input words by interpreting the input-output behavior of NMT models. Previous work BIBREF7 treats NMT models as black-boxes and provides explanations that closely resemble the attention scores in NMT models. However, recent studies reveal that attention does not provide meaningful explanations since the relationship between attention scores and model output is unclear BIBREF8.",
"In this paper, we focus on the second thread and try to open the black-box by exploiting the gradients in NMT generation, which aims to estimate the word importance better. Specifically, we employ the integrated gradients method BIBREF9 to attribute the output to the input words with the integration of first-order derivatives. We justify the gradient-based approach via quantitative comparison with black-box methods on a couple of perturbation operations, several language pairs, and two representative model architectures, demonstrating its superiority on estimating word importance.",
"We analyze the linguistic behaviors of words with the importance and show its potential to improve NMT models. First, we leverage the word importance to identify input words that are under-translated by NMT models. Experimental results show that the gradient-based approach outperforms both the best black-box method and other comparative methods. Second, we analyze the linguistic roles of identified important words, and find that words of certain syntactic categories have higher importance while the categories vary across language. For example, nouns are more important for Chinese$\\Rightarrow $English translation, while prepositions are more important for English-French and -Japanese translation. This finding can inspire better design principles of NMT architectures for different language pairs. For instance, a better architecture for a given language pair should consider its own language characteristics."
],
[
"Our main contributions are:",
"Our study demonstrates the necessity and effectiveness of exploiting the intermediate gradients for estimating word importance.",
"We find that word importance is useful for understanding NMT by identifying under-translated words.",
"We provide empirical support for the design principle of NMT architectures: essential inductive bias (e.g., language characteristics) should be considered for model design."
],
[
"Interpretability of Seq2Seq models has recently been explored mainly from two perspectives: interpreting internal representations and understanding input-output behaviors. Most of the existing work focus on the former thread, which analyzes the linguistic information embeded in the learned representations BIBREF3, BIBREF4, BIBREF10 or the hidden units BIBREF6, BIBREF5. Several researchers turn to expose systematic differences between human and NMT translations BIBREF11, BIBREF12, indicating the linguistic properties worthy of investigating. However, the learned representations may depend on the model implementation, which potentially limit the applicability of these methods to a broader range of model architectures. Accordingly, we focus on understanding the input-output behaviors, and validate on different architectures to demonstrate the universality of our findings.",
"Concerning interpreting the input-output behavior, previous work generally treats Seq2Seq models as black-boxes BIBREF13, BIBREF7. For example, alvarez2017causal measure the relevance between two input-output tokens by perturbing the input sequence. However, they do not exploit any intermediate information such as gradients, and the relevance score only resembles attention scores. Recently, Jain2019AttentionIN show that attention scores are in weak correlation with the feature importance. Starting from this observation, we exploit the intermediate gradients to better estimate word importance, which consistently outperforms its attention counterpart across model architectures and language pairs."
],
[
"The intermediate gradients have proven to be useful in interpreting deep learning models, such as NLP models BIBREF14, BIBREF15 and computer vision models BIBREF16, BIBREF9. Among all gradient-based approaches, the integrated gradients BIBREF9 is appealing since it does not need any instrumentation of the architecture and can be computed easily by calling gradient operations. In this work, we employ the IG method to interpret NMT models and reveal several interesting findings, which can potentially help debug NMT models and design better architectures for specific language pairs."
],
[
"In machine translation task, a NMT model $F$: $\\textbf {x} \\rightarrow \\textbf {y}$ maximizes the probability of a target sequence $\\textbf {y} = \\lbrace y_1,...,y_N\\rbrace $ given a source sentence $\\textbf {x} = \\lbrace x_1,...,x_M\\rbrace $:",
"where $\\mathbf {\\theta }$ is the model parameter and $\\textbf {y}_{<n}$ is a partial translation. At each time step n, the model generates an output word of the highest probability based on the source sentence $\\textbf {x}$ and the partial translation $\\textbf {y}_{<n}$. The training objective is to minimize the negative log-likelihood loss on the training corpus. During the inference, beam search is employed to decode a more optimal translation. In this study, we investigate the contribution of each input word $x_m$ to the translated sentence ${\\bf y}$."
],
[
"In this work, the notion of “word importance” is employed to quantify the contribution that a word in the input sentence makes to the NMT generations. We categorize the methods of word importance estimation into two types: black-box methods without the knowledge of the model and white-box methods that have access to the model internal information (e.g., parameters and gradients). Previous studies mostly fall into the former type, and in this study, we investigate several representative black-box methods:",
"Content Words: In linguistics, all words can be categorized as either content or content-free words. Content words consist mostly of nouns, verbs, and adjectives, which carry descriptive meanings of the sentence and thereby are often considered as important.",
"Frequent Words: We rank the relative importance of input words according to their frequency in the training corpus. We do not consider the top 50 most frequent words since they are mostly punctuation and stop words.",
"Causal Model BIBREF7: Since the causal model is complicated to implement and its scores closely resemble attention scores in NMT models. In this study, we use Attention scores to simulate the causal model.",
"Our approach belongs to the white-box category by exploiting the intermediate gradients, which will be described in the next section."
],
[
"In this work, we resort to a gradient-based method, integrated gradients BIBREF9 (IG), which was originally proposed to attribute the model predictions to input features. It exploits the handy model gradient information by integrating first-order derivatives. IG is implementation invariant and does not require neural models to be differentiable or smooth, thereby is suitable for complex neural networks like Transformer. In this work, we use IG to estimate the word importance in an input sentence precisely.",
"Formally, let $\\textbf {x} = (x_1, ..., x_M)$ be the input sentence and $\\textbf {x}^{\\prime }$ be a baseline input. $F$ is a well-trained NMT model, and $F(\\textbf {x})_n$ is the model output (i.e., $P(y_n|\\textbf {y}_{<n},\\textbf {x})$) at time step $n$. Integrated gradients is then defined as the integral of gradients along the straightline path from the baseline $\\textbf {x}^{\\prime }$ to the input $\\textbf {x}$. In detail, the contribution of the $m^{th}$ word in $\\textbf {x}$ to the prediction of $F(\\textbf {x})_n$ is defined as follows.",
"where $\\frac{\\partial {F(\\textbf {x})_n}}{\\partial {\\textbf {x}_m}}$ is the gradient of $F(\\textbf {x})_n$ w.r.t. the embedding of the $m^{th}$ word. In this paper, as suggested, the baseline input $\\textbf {x}^{\\prime }$ is set as a sequence of zero embeddings that has the same sequence length $M$. In this way, we can compute the contribution of a specific input word to a designated output word. Since the above formula is intractable for deep neural models, we approximate it by summing the gradients along a multi-step path from baseline $\\textbf {x}^{\\prime }$ to the input x.",
"where $S$ denotes the number of steps that are uniformly distributed along the path. The IG will be more accurate if a larger S is used. In our preliminary experiments, we varied the steps and found 300 steps yielding fairly good performance.",
"Following the formula, we can calculate the contribution of every input word makes to every output word, forming a contribution matrix of size $M \\times N$, where $N$ is the output sentence length. Given the contribution matrix, we can obtain the word importance of each input word to the entire output sentence. To this end, for each input word, we first aggregate its contribution values to all output words by the sum operation, and then normalize all sums through the Softmax function. Figure FIGREF13 illustrates an example of the calculated word importance and the contribution matrix, where an English sentence is translated into a French sentence using the Transformer model. A negative contribution value indicates that the input word has negative effects on the output word."
],
[
"To make the conclusion convincing, we first choose two large-scale datasets that are publicly available, i.e., Chinese-English and English-French. Since English, French, and Chinese all belong to the subject-verb-object (SVO) family, we choose another very different subject-object-verb (SOV) language, Japanese, which might bring some interesting linguistic behaviors in English-Japanese translation.",
"For Chinese-English task, we use WMT17 Chinese-English dataset that consists of $20.6$M sentence pairs. For English-French task, we use WMT14 English-French dataset that comprises $35.5$M sentence pairs. For English-Japanese task, we follow BIBREF17 to use the first two sections of WAT17 English-Japanese dataset that consists of $1.9$M sentence pairs. Following the standard NMT procedure, we adopt the standard byte pair encoding (BPE) BIBREF18 with 32K merge operations for all language pairs. We believe that these datasets are large enough to confirm the rationality and validity of our experimental analyses."
],
[
"We choose the state-of-the-art Transformer BIBREF1 model and the conventional RNN-Search model BIBREF0 as our test bed. We implement the Attribution method based on the Fairseq-py BIBREF19 framework for the above models. All models are trained on the training corpus for 100k steps under the standard settings, which achieve comparable translation results. All the following experiments are conducted on the test dataset, and we estimate the input word importance using the model generated hypotheses.",
"In the following experiments, we compare IG (Attribution) with several black-box methods (i.e., Content, Frequency, Attention) as introduced in Section SECREF8. In Section SECREF21, to ensure that the translation performance decrease attributes to the selected words instead of the perturbation operations, we randomly select the same number of words to perturb (Random), which serves as a baseline. Since there is no ranking for content words, we randomly select a set of content words as important words. To avoid the potential bias introduced by randomness (i.e., Random and Content), we repeat the experiments for 10 times and report the averaged results. We calculate the Attention importance in a similar manner as the Attribution, except that the attention scores use a max operation due to the better performance."
],
[
"We evaluate the effectiveness of estimating word importance by the translation performance decrease. More specifically, unlike the usual way, we measure the decrease of translation performance when perturbing a set of important words that are of top-most word importance in a sentence. The more translation performance degrades, the more important the word is.",
"We use the standard BLEU score as the evaluation metric for translation performance. To make the conclusion more convincing, we conduct experiments on different types of synthetic perturbations (Section SECREF21), as well as different NMT architectures and language pairs (Section SECREF27). In addition, we compare with a supervised erasure method, which requires ground-truth translations for scoring word importance (Section SECREF30)."
],
[
"In this experiment, we investigate the effectiveness of word importance estimation methods under different synthetic perturbations. Since the perturbation on text is notoriously hard BIBREF20 due to the semantic shifting problem, in this experiment, we investigate three types of perturbations to avoid the potential bias :",
"Deletion perturbation removes the selected words from the input sentence, and it can be regarded as a specific instantiation of sentence compression BIBREF21.",
"Mask perturbation replaces embedding vectors of the selected words with all-zero vectors BIBREF22, which is similar to Deletion perturbation except that it retains the placeholder.",
"Grammatical Replacement perturbation replaces a word by another word of the same linguistic role (i.e., POS tags), yielding a sentence that is grammatically correct but semantically nonsensical BIBREF23, BIBREF24, such as “colorless green ideas sleep furiously”.",
"Figure FIGREF19 illustrates the experimental results on Chinese$\\Rightarrow $English translation with Transformer. It shows that Attribution method consistently outperforms other methods against different perturbations on a various number of operations. Here the operation number denotes the number of perturbed words in a sentence. Specifically, we can make the following observations."
],
[
"Under three different perturbations, perturbing words of top-most importance leads to lower BLEU scores than Random selected words. It confirms the existence of important words, which have greater impacts on translation performance. Furthermore, perturbing important words identified by Attribution outperforms the Random method by a large margin (more than 4.0 BLEU under 5 operations)."
],
[
"Figure FIGREF19 shows that two black-box methods (i.e., Content, Frequency) perform only slightly better than the Random method. Specifically, the Frequency method demonstrates even worse performances under the Mask perturbation. Therefore, linguistic properties (such as POS tags) and the word frequency can only partially help identify the important words, but it is not as accurate as we thought. In the meanwhile, it is intriguing to explore what exact linguistic characteristics these important words reveal, which will be introduced in Section SECREF5.",
"We also evaluate the Attention method, which bases on the encoder-decoder attention scores at the last layer of Transformer. Note that the Attention method is also used to simulate the best black-box method SOCRAT, and the results show that it is more effective than black-box methods and the Random baseline. Given the powerful Attention method, Attribution method still achieves best performances under all three perturbations. Furthermore, we find that the gap between Attribution and Attention is notably large (around $1.0+$ BLEU difference). Attention method does not provide as accurate word importance as the Attribution, which exhibits the superiority of gradient-based methods and consists with the conclusion reported in the previous study BIBREF8.",
"In addition, as shown in Figure FIGREF19, the perturbation effectiveness of Deletion, Mask, and Grammatical Replacement varies from strong to weak. In the following experiments, we choose Mask as the representative perturbation operation for its moderate perturbation performance, based on which we compare two most effective methods Attribution and Attention."
],
[
"We validate the effectiveness of the proposed approach using a different NMT architecture RNN-Search on the Chinese$\\Rightarrow $English translation task. The results are shown in Figure FIGREF20(a). We observe that the Attribution method still outperforms both Attention method and Random method by a decent margin. By comparing to Transformer, the results also reveal that the RNN-Search model is less robust to these perturbations. To be specific, under the setting of five operations and Attribution method, Transformer shows a relative decrease of $55\\%$ on BLEU scores while the decline of RNN-Search model is $64\\%$."
],
[
"We further conduct experiments on another two language pairs (i.e., English$\\Rightarrow $French, English$\\Rightarrow $Japanese in Figures FIGREF20(b, c)) as well as the reverse directions (Figures FIGREF20(d, e, f)) using Transformer under the Mask perturbation. In all the cases, Attribution shows the best performance while Random achieves the worst result. More specifically, Attribution method shows similar translation quality degradation on all three language-pairs, which declines to around the half of the original BLEU score with five operations."
],
[
"There exists another straightforward method, Erasure BIBREF7, BIBREF22, BIBREF25, which directly evaluates the word importance by measuring the translation performance degradation of each word. Specifically, it erases (i.e., Mask) one word from the input sentence each time and uses the BLEU score changes to denote the word importance (after normalization).",
"In Figure FIGREF31, we compare Erasure method with Attribution method under the Mask perturbation. The results show that Attribution method is less effective than Erasure method when only one word is perturbed. But it outperforms the Erasure method when perturbing 2 or more words. The results reveal that the importance calculated by erasing only one word cannot be generalized to multiple-words scenarios very well. Besides, the Erasure method is a supervised method which requires ground-truth references, and finding a better words combination is computation infeasible when erasing multiple words.",
"We close this section by pointing out that our gradient-based method consistently outperforms its black-box counterparts in various settings, demonstrating the effectiveness and universality of exploiting gradients for estimating word importance. In addition, our approach is on par with or even outperforms the supervised erasure method (on multiple-word perturbations). This is encouraging since our approach does not require any external resource and is fully unsupervised."
],
[
"In this section, we conduct analyses on two potential usages of word importance, which can help debug NMT models (Section SECREF33) and design better architectures for specific languages (Section SECREF37). Due to the space limitation, we only analyze the results of Chinese$\\Rightarrow $English, English$\\Rightarrow $French, and English$\\Rightarrow $Japanese. We list the results on the reverse directions in Appendix, in which the general conclusions also hold."
],
[
"In this experiment, we propose to use the estimated word importance to detect the under-translated words by NMT models. Intuitively, under-translated input words should contribute little to the NMT outputs, yielding much smaller word importance. Given 500 Chinese$\\Rightarrow $English sentence pairs translated by the Transformer model (BLEU 23.57), we ask ten human annotators to manually label the under-translated input words, and at least two annotators label each input-hypothesis pair. These annotators have at least six years of English study experience, whose native language is Chinese. Among these sentences, 178 sentences have under-translation errors with 553 under-translated words in total.",
"Table TABREF32 lists the accuracy of detecting under-translation errors by comparing words of least importance and human-annotated under-translated words. As seen, our Attribution method consistently and significantly outperforms both Erasure and Attention approaches. By exploiting the word importance calculated by Attribution method, we can identify the under-translation errors automatically without the involvement of human interpreters. Although the accuracy is not high, it is worth noting that our under-translation method is very simple and straightforward. This is potentially useful for debugging NMT models, e.g., automatic post-editing with constraint decoding BIBREF26, BIBREF27."
],
[
"In this section, we analyze the linguistic characteristics of important words identified by the attribution-based approach. Specifically, we investigate several representative sets of linguistic properties, including POS tags, and fertility, and depth in a syntactic parse tree. In these analyses, we multiply the word importance with the corresponding sentence length for fair comparison. We use a decision tree based regression model to calculate the correlation between the importance and linguistic properties.",
"Table TABREF34 lists the correlations, where a higher value indicates a stronger correlation. We find that the syntactic information is almost independent of the word importance value. Instead, the word importance strongly correlates with the POS tags and fertility features, and these features in total contribute over 95%. Therefore, in the following analyses, we mainly focus on the POS tags (Table TABREF35) and fertility properties (Table TABREF36). For better illustration, we calculate the distribution over the linguistic property based on both the Attribution importance (“Attr.”) and the word frequency (“Count”) inside a sentence. The larger the relative increase between these two values, the more important the linguistic property is."
],
[
"As shown in Table TABREF35, content words are more important on Chinese$\\Rightarrow $English but content-free words are more important on English$\\Rightarrow $Japanese. On English$\\Rightarrow $French, there is no notable increase or decrease of the distribution since English and French are in essence very similar. We also obtain some specific findings of great interest. For example, we find that noun is more important on Chinese$\\Rightarrow $English translation, while preposition is more important on English$\\Rightarrow $French translation. More interestingly, English$\\Rightarrow $Japanese translation shows a substantial discrepancy in contrast to the other two language pairs. The results reveal that preposition and punctuation are very important in English$\\Rightarrow $Japanese translation, which is counter-intuitive.",
"Punctuation in NMT is understudied since it carries little information and often does not affect the understanding of a sentence. However, we find that punctuation is important on English$\\Rightarrow $Japanese translation, whose proportion increases dramatically. We conjecture that it is because the punctuation could affect the sense groups in a sentence, which further benefits the syntactic reordering in Japanese."
],
[
"We further compare the fertility distribution based on word importance and the word frequency on three language pairs. We hypothesize that a source word that corresponds to multiple target words should be more important since it contributes more to both sentence length and BLEU score.",
"Table TABREF36 lists the results. Overall speaking, one-to-many fertility is consistently more important on all three language pairs, which confirms our hypothesis. On the contrary, null-aligned words receive much less attention, which shows a persistently decrease on three language pairs. It is also reasonable since null-aligned input words contribute almost nothing to the translation outputs."
],
[
"We approach understanding NMT by investigating the word importance via a gradient-based method, which bridges the gap between word importance and translation performance. Empirical results show that the gradient-based method is superior to several black-box methods in estimating the word importance. Further analyses show that important words are of distinct syntactic categories on different language pairs, which might support the viewpoint that essential inductive bias should be introduced into the model design BIBREF28. Our study also suggests the possibility of detecting the notorious under-translation problem via the gradient-based method.",
"This paper is an initiating step towards the general understanding of NMT models, which may bring some potential improvements, such as",
"Interactive MT and Constraint Decoding BIBREF29, BIBREF26: The model pays more attention to the detected unimportant words, which are possibly under-translated;",
"Adaptive Input Embedding BIBREF30: We can extend the adaptive softmax BIBREF31 to the input embedding of variable capacity – more important words are assigned with more capacity;",
"NMT Architecture Design: The language-specific inductive bias (e.g., different behaviors on POS) should be incorporated into the model design.",
"We can also explore other applications of word importance to improve NMT models, such as more tailored training methods. In general, model interpretability can build trust in model predictions, help error diagnosis and facilitate model refinement. We expect our work could shed light on the NMT model understanding and benefit the model improvement.",
"There are many possible ways to implement the general idea of exploiting gradients for model interpretation. The aim of this paper is not to explore this whole space but simply to show that some fairly straightforward implementations work well. Our approach can benefit from advanced exploitation of the gradients or other useful intermediate information, which we leave to the future work."
],
[
"Shilin He and Michael R. Lyu were supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14210717 of the General Research Fund), and Microsoft Research Asia (2018 Microsoft Research Asia Collaborative Research Award). We thank the anonymous reviewers for their insightful comments and suggestions."
],
[
"2",
"We analyze the distribution of syntactic categories and word fertility on the same language pairs with reverse directions, i.e., English$\\Rightarrow $Chinese, French$\\Rightarrow $English, and Japanese$\\Rightarrow $English. The results are shown in Table TABREF43 and Table TABREF44 respectively, where we observe similar findings as before. We use the Stanford POS tagger to parse the English and French input sentences, and use the Kytea to parse the Japanese input sentences."
],
[
"On English$\\Rightarrow $Chinese, content words are more important than content-free words, while the situation is reversed on both French$\\Rightarrow $English and Japanese$\\Rightarrow $English translations. Since there is no clear boundary between Preposition/Determiner and other categories in Japanese, we set both categories to be none. Similarly, Punctuation is more important on Japanese$\\Rightarrow $English, which is in line with the finding on English$\\Rightarrow $Japanese. Overall speaking, it might indicate that the Syntactic distribution with word importance is language-pair related instead of the direction."
],
[
"The word fertility also shows similar trend as the previously reported results, where one-to-many fertility is more important and null-aligned fertility is less important. Interestingly, many-to-one fertility shows an increasing trend on Japanese$\\Rightarrow $English translation, but the proportion is relatively small.",
"In summary, the findings on language pairs with reverse directions still agree with the findings in the paper, which further confirms the generality of our experimental findings."
]
],
"section_name": [
"Introduction",
"Introduction ::: Contributions",
"Related Work ::: Interpreting Seq2Seq Models",
"Related Work ::: Exploiting Gradients for Model Interpretation",
"Approach ::: Neural Machine Translation",
"Approach ::: Word Importance",
"Approach ::: Integrated Gradients",
"Experiment ::: Data",
"Experiment ::: Implementation",
"Experiment ::: Evaluation",
"Experiment ::: Results on Different Perturbations",
"Experiment ::: Results on Different Perturbations ::: Important words are more influential on translation performance than the others.",
"Experiment ::: Results on Different Perturbations ::: The gradient-based method is superior to comparative methods (e.g., Attention) in estimating word importance.",
"Experiment ::: Results on Different NMT Architecture and Language Pairs ::: Different NMT Architecture",
"Experiment ::: Results on Different NMT Architecture and Language Pairs ::: Different Language Pairs and Directions",
"Experiment ::: Comparison with Supervised Erasure",
"Analysis",
"Analysis ::: Effect on Detecting Translation Errors",
"Analysis ::: Analysis on Linguistic Properties",
"Analysis ::: Analysis on Linguistic Properties ::: Certain syntactic categories have higher importance while the categories vary across language pairs.",
"Analysis ::: Analysis on Linguistic Properties ::: Words of high fertility are always important.",
"Discussion and Conclusion",
"Acknowledgement",
"Analyses on Reverse Directions",
"Analyses on Reverse Directions ::: Syntactic Categories",
"Analyses on Reverse Directions ::: Word Fertility"
]
} | {
"answers": [
{
"annotation_id": [
"488ea0905a51162465250dbd12e94b246b326971"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0cd85984c76a15126dda3dacc4cf3aa3fff47545",
"95456473d92e7d6e1963f3dbba7bf2f336a34f17"
],
"answer": [
{
"evidence": [
"In this experiment, we propose to use the estimated word importance to detect the under-translated words by NMT models. Intuitively, under-translated input words should contribute little to the NMT outputs, yielding much smaller word importance. Given 500 Chinese$\\Rightarrow $English sentence pairs translated by the Transformer model (BLEU 23.57), we ask ten human annotators to manually label the under-translated input words, and at least two annotators label each input-hypothesis pair. These annotators have at least six years of English study experience, whose native language is Chinese. Among these sentences, 178 sentences have under-translation errors with 553 under-translated words in total.",
"Table TABREF32 lists the accuracy of detecting under-translation errors by comparing words of least importance and human-annotated under-translated words. As seen, our Attribution method consistently and significantly outperforms both Erasure and Attention approaches. By exploiting the word importance calculated by Attribution method, we can identify the under-translation errors automatically without the involvement of human interpreters. Although the accuracy is not high, it is worth noting that our under-translation method is very simple and straightforward. This is potentially useful for debugging NMT models, e.g., automatic post-editing with constraint decoding BIBREF26, BIBREF27."
],
"extractive_spans": [],
"free_form_answer": "They measured the under-translated words with low word importance score as calculated by Attribution.\nmethod",
"highlighted_evidence": [
"Intuitively, under-translated input words should contribute little to the NMT outputs, yielding much smaller word importance. ",
"By exploiting the word importance calculated by Attribution method, we can identify the under-translation errors automatically without the involvement of human interpreters. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this experiment, we propose to use the estimated word importance to detect the under-translated words by NMT models. Intuitively, under-translated input words should contribute little to the NMT outputs, yielding much smaller word importance. Given 500 Chinese$\\Rightarrow $English sentence pairs translated by the Transformer model (BLEU 23.57), we ask ten human annotators to manually label the under-translated input words, and at least two annotators label each input-hypothesis pair. These annotators have at least six years of English study experience, whose native language is Chinese. Among these sentences, 178 sentences have under-translation errors with 553 under-translated words in total."
],
"extractive_spans": [
"we ask ten human annotators to manually label the under-translated input words, and at least two annotators label each input-hypothesis pair"
],
"free_form_answer": "",
"highlighted_evidence": [
" Given 500 Chinese$\\Rightarrow $English sentence pairs translated by the Transformer model (BLEU 23.57), we ask ten human annotators to manually label the under-translated input words, and at least two annotators label each input-hypothesis pair. These annotators have at least six years of English study experience, whose native language is Chinese. Among these sentences, 178 sentences have under-translation errors with 553 under-translated words in total."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"322e7e971f3569ac578b9c6e653b5ce61ad0e1a0",
"abba958cfd517a62ad4ff667c8715dc14eeece5e"
],
"answer": [
{
"evidence": [
"Following the formula, we can calculate the contribution of every input word makes to every output word, forming a contribution matrix of size $M \\times N$, where $N$ is the output sentence length. Given the contribution matrix, we can obtain the word importance of each input word to the entire output sentence. To this end, for each input word, we first aggregate its contribution values to all output words by the sum operation, and then normalize all sums through the Softmax function. Figure FIGREF13 illustrates an example of the calculated word importance and the contribution matrix, where an English sentence is translated into a French sentence using the Transformer model. A negative contribution value indicates that the input word has negative effects on the output word."
],
"extractive_spans": [
"Given the contribution matrix, we can obtain the word importance of each input word to the entire output sentence. "
],
"free_form_answer": "",
"highlighted_evidence": [
"Following the formula, we can calculate the contribution of every input word makes to every output word, forming a contribution matrix of size $M \\times N$, where $N$ is the output sentence length. Given the contribution matrix, we can obtain the word importance of each input word to the entire output sentence. To this end, for each input word, we first aggregate its contribution values to all output words by the sum operation, and then normalize all sums through the Softmax function."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Formally, let $\\textbf {x} = (x_1, ..., x_M)$ be the input sentence and $\\textbf {x}^{\\prime }$ be a baseline input. $F$ is a well-trained NMT model, and $F(\\textbf {x})_n$ is the model output (i.e., $P(y_n|\\textbf {y}_{<n},\\textbf {x})$) at time step $n$. Integrated gradients is then defined as the integral of gradients along the straightline path from the baseline $\\textbf {x}^{\\prime }$ to the input $\\textbf {x}$. In detail, the contribution of the $m^{th}$ word in $\\textbf {x}$ to the prediction of $F(\\textbf {x})_n$ is defined as follows.",
"where $\\frac{\\partial {F(\\textbf {x})_n}}{\\partial {\\textbf {x}_m}}$ is the gradient of $F(\\textbf {x})_n$ w.r.t. the embedding of the $m^{th}$ word. In this paper, as suggested, the baseline input $\\textbf {x}^{\\prime }$ is set as a sequence of zero embeddings that has the same sequence length $M$. In this way, we can compute the contribution of a specific input word to a designated output word. Since the above formula is intractable for deep neural models, we approximate it by summing the gradients along a multi-step path from baseline $\\textbf {x}^{\\prime }$ to the input x."
],
"extractive_spans": [],
"free_form_answer": "They compute the gradient of the output at each time step with respect to the input words to decide the importance.",
"highlighted_evidence": [
"Formally, let $\\textbf {x} = (x_1, ..., x_M)$ be the input sentence and $\\textbf {x}^{\\prime }$ be a baseline input. $F$ is a well-trained NMT model, and $F(\\textbf {x})_n$ is the model output (i.e., $P(y_n|\\textbf {y}_{\n\nwhere $\\frac{\\partial {F(\\textbf {x})_n}}{\\partial {\\textbf {x}_m}}$ is the gradient of $F(\\textbf {x})_n$ w.r.t. the embedding of the $m^{th}$ word. ",
"In this way, we can compute the contribution of a specific input word to a designated output word."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0ba74186f383deaae10bc99739a1fcba2defcfe8",
"d742a74b99f81aba369418fa68917344da5bee58"
],
"answer": [
{
"evidence": [
"We choose the state-of-the-art Transformer BIBREF1 model and the conventional RNN-Search model BIBREF0 as our test bed. We implement the Attribution method based on the Fairseq-py BIBREF19 framework for the above models. All models are trained on the training corpus for 100k steps under the standard settings, which achieve comparable translation results. All the following experiments are conducted on the test dataset, and we estimate the input word importance using the model generated hypotheses."
],
"extractive_spans": [
" Transformer BIBREF1 model and the conventional RNN-Search model BIBREF0"
],
"free_form_answer": "",
"highlighted_evidence": [
"We choose the state-of-the-art Transformer BIBREF1 model and the conventional RNN-Search model BIBREF0 as our test bed."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We choose the state-of-the-art Transformer BIBREF1 model and the conventional RNN-Search model BIBREF0 as our test bed. We implement the Attribution method based on the Fairseq-py BIBREF19 framework for the above models. All models are trained on the training corpus for 100k steps under the standard settings, which achieve comparable translation results. All the following experiments are conducted on the test dataset, and we estimate the input word importance using the model generated hypotheses."
],
"extractive_spans": [
"Transformer",
"RNN-Search model"
],
"free_form_answer": "",
"highlighted_evidence": [
"We choose the state-of-the-art Transformer BIBREF1 model and the conventional RNN-Search model BIBREF0 as our test bed. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Does their model suffer exhibit performance drops when incorporating word importance?",
"How do they measure which words are under-translated by NMT models?",
"How do their models decide how much improtance to give to the output words?",
"Which model architectures do they test their word importance approach on?"
],
"question_id": [
"384bf1f55c34b36cb03f916f50bbefade6c86a75",
"aef607d2ac46024be17b1ddd0ed3f13378c563a6",
"93beae291b455e5d3ecea6ac73b83632a3ae7ec7",
"6c91d44d5334a4ac80100eead4e105d34e99a284"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: An example of (a) word importance and (b) contribution matrix calculated by Attribution (integrated gradients) on English⇒French translation task. Input in English: “It has always taken place .” Output in French: “Elle a toujours eu lieu .”",
"Figure 2: Effect of three types of synthetic perturbations on Chinese⇒English translation using the Transformer.",
"Figure 3: Effect of the Mask perturbation on (a) Chinese⇒English translation using the RNN-Search model, (b, c, d, e, f) other language pairs and directions using Transformer model.",
"Figure 4: Effect of Attribution and Erasure methods on Chinese⇒English translation with Mask perturbation.",
"Table 1: F1 accuracy of detecting under-translation errors with the estimated word importance.",
"Table 2: Correlation between Attribution word importance with POS tags, Fertility, and Syntactic Depth. Fertility can be categorized into 4 types: one-to-many (“≥ 2”), one-to-one (“1”), many-to-one (“(0, 1)”), and null-aligned (“0”). Syntactic depth shows the depth of a word in the dependency tree. A lower tree depth indicates closer to the root node in the dependency tree, which might indicate a more important word.",
"Table 3: Distribution of syntactic categories (e.g. content words vs. content-free words) based on word count (“Count”) and Attribution importance (“Attri.”). “4” denotes relative change over the count-based distribution.",
"Table 4: Distributions of word fertility and their relative change based on Attribution importance and word count.",
"Table 5: Distribution of syntactic categories with reverse directions based on word count (“Count”) and Attribution importance (“Attri.”). “4” denotes relative change over the count-based distribution.",
"Table 6: Distributions of word fertility and relative changes with reverse directions."
],
"file": [
"3-Figure1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"6-Figure4-1.png",
"7-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"11-Table5-1.png",
"11-Table6-1.png"
]
} | [
"How do they measure which words are under-translated by NMT models?",
"How do their models decide how much improtance to give to the output words?"
] | [
[
"1909.00326-Analysis ::: Effect on Detecting Translation Errors-0",
"1909.00326-Analysis ::: Effect on Detecting Translation Errors-1"
],
[
"1909.00326-Approach ::: Integrated Gradients-2",
"1909.00326-Approach ::: Integrated Gradients-4",
"1909.00326-Approach ::: Integrated Gradients-1"
]
] | [
"They measured the under-translated words with low word importance score as calculated by Attribution.\nmethod",
"They compute the gradient of the output at each time step with respect to the input words to decide the importance."
] | 147 |
1903.03530 | Fast Prototyping a Dialogue Comprehension System for Nurse-Patient Conversations on Symptom Monitoring | Data for human-human spoken dialogues for research and development are currently very limited in quantity, variety, and sources; such data are even scarcer in healthcare. In this work, we investigate fast prototyping of a dialogue comprehension system by leveraging on minimal nurse-to-patient conversations. We propose a framework inspired by nurse-initiated clinical symptom monitoring conversations to construct a simulated human-human dialogue dataset, embodying linguistic characteristics of spoken interactions like thinking aloud, self-contradiction, and topic drift. We then adopt an established bidirectional attention pointer network on this simulated dataset, achieving more than 80% F1 score on a held-out test set from real-world nurse-to-patient conversations. The ability to automatically comprehend conversations in the healthcare domain by exploiting only limited data has implications for improving clinical workflows through red flag symptom detection and triaging capabilities. We demonstrate the feasibility for efficient and effective extraction, retrieval and comprehension of symptom checking information discussed in multi-turn human-human spoken conversations. | {
"paragraphs": [
[
"Spoken conversations still remain the most natural and effortless means of human communication. Thus a lot of valuable information is conveyed and exchanged in such an unstructured form. In telehealth settings, nurses might call discharged patients who have returned home to continue to monitor their health status. Human language technology that can efficiently and effectively extract key information from such conversations is clinically useful, as it can help streamline workflow processes and digitally document patient medical information to increase staff productivity. In this work, we design and prototype a dialogue comprehension system in the question-answering manner, which is able to comprehend spoken conversations between nurses and patients to extract clinical information."
],
[
"Machine comprehension of written passages has made tremendous progress recently. Large quantities of supervised training data for reading comprehension (e.g. SQuAD BIBREF0 ), the wide adoption and intense experimentation of neural modeling BIBREF1 , BIBREF2 , and the advancements in vector representations of word embeddings BIBREF3 , BIBREF4 all contribute significantly to the achievements obtained so far. The first factor, the availability of large scale datasets, empowers the latter two factors. To date, there is still very limited well-annotated large-scale data suitable for modeling human-human spoken dialogues. Therefore, it is not straightforward to directly port over the recent endeavors in reading comprehension to dialogue comprehension tasks.",
"In healthcare, conversation data is even scarcer due to privacy issues. Crowd-sourcing is an efficient way to annotate large quantities of data, but less suitable for healthcare scenarios, where domain knowledge is required to guarantee data quality. To demonstrate the feasibility of a dialogue comprehension system used for extracting key clinical information from symptom monitoring conversations, we developed a framework to construct a simulated human-human dialogue dataset to bootstrap such a prototype. Similar efforts have been conducted for human-machine dialogues for restaurant or movie reservations BIBREF5 . To the best of our knowledge, no one to date has done so for human-human conversations in healthcare."
],
[
"Human-human spoken conversations are a dynamic and interactive flow of information exchange. While developing technology to comprehend such spoken conversations presents similar technical challenges as machine comprehension of written passages BIBREF6 , the challenges are further complicated by the interactive nature of human-human spoken conversations:",
"(1) Zero anaphora is more common: Co-reference resolution of spoken utterances from multiple speakers is needed. For example, in Figure FIGREF5 (a) headaches, the pain, it, head bulging all refer to the patient's headache symptom, but they were uttered by different speakers and across multiple utterances and turns. In addition, anaphors are more likely to be omitted (see Figure FIGREF5 (a) A4) as this does not affect the human listener’s understanding, but it might be challenging for computational models.",
"(2) Thinking aloud more commonly occurs: Since it is more effortless to speak than to type, one is more likely to reveal her running thoughts when talking. In addition, one cannot retract what has been uttered, while in text communications, one is more likely to confirm the accuracy of the information in a written response and revise if necessary before sending it out. Thinking aloud can lead to self-contradiction, requiring more context to fully understand the dialogue; e.g., in A6 in Figure FIGREF5 (a), the patient at first says he has none of the symptoms asked, but later revises his response saying that he does get dizzy after running.",
"(3) Topic drift is more common and harder to detect in spoken conversations: An example is shown in Figure FIGREF5 (a) in A3, where No is actually referring to cough in the previous question, and then the topic is shifted to headache. In spoken conversations, utterances are often incomplete sentences so traditional linguistic features used in written passages such as punctuation marks indicating syntactic boundaries or conjunction words suggesting discourse relations might no longer exist."
],
[
"Figure FIGREF5 (b) illustrates the proposed dialogue comprehension task using a question answering (QA) model. The input are a multi-turn symptom checking dialogue INLINEFORM0 and a query INLINEFORM1 specifying a symptom with one of its attributes; the output is the extracted answer INLINEFORM2 from the given dialogue. A training or test sample is defined as INLINEFORM3 . Five attributes, specifying certain details of clinical significance, are defined to characterize the answer types of INLINEFORM4 : (1) the time the patient has been experiencing the symptom, (2) activities that trigger the symptom (to occur or worsen), (3) the extent of seriousness, (4) the frequency occurrence of the symptom, and (5) the location of symptom. For each symptom/attribute, it can take on different linguistic expressions, defined as entities. Note that if the queried symptom or attribute is not mentioned in the dialogue, the groundtruth output is “No Answer”, as in BIBREF6 ."
],
[
"Large-scale reading comprehension tasks like SQuAD BIBREF0 and MARCO BIBREF7 provide question-answer pairs from a vast range of written passages, covering different kinds of factual answers involving entities such as location and numerical values. Furthermore, HotpotQA BIBREF8 requires multi-step inference and provides numerous answer types. CoQA BIBREF9 and QuAC BIBREF10 are designed to mimic multi-turn information-seeking discussions of the given material. In these tasks, contextual reasoning like coreference resolution is necessary to grasp rich linguistic patterns, encouraging semantic modeling beyond naive lexical matching. Neural networks contribute to impressive progress in semantic modeling: distributional semantic word embeddings BIBREF3 , contextual sequence encoding BIBREF11 , BIBREF12 and the attention mechanism BIBREF13 , BIBREF14 are widely adopted in state-of-the-art comprehension models BIBREF1 , BIBREF2 , BIBREF4 .",
"While language understanding tasks in dialogue such as domain identification BIBREF15 , slot filling BIBREF16 and user intent detection BIBREF17 have attracted much research interest, work in dialogue comprehension is still limited, if any. It is labor-intensive and time-consuming to obtain a critical mass of annotated conversation data for computational modeling. Some propose to collect text data from human-machine or machine-machine dialogues BIBREF18 , BIBREF5 . In such cases, as human speakers are aware of current limitations of dialogue systems or due to pre-defined assumptions of user simulators, there are fewer cases of zero anaphora, thinking aloud, and topic drift, which occur more often in human-human spoken interactions."
],
[
"There is emerging interest in research and development activities at the intersection of machine learning and healthcare , of which much of the NLP related work are centered around social media or online forums (e.g., BIBREF19 , BIBREF20 ), partially due to the world wide web as a readily available source of information. Other work in this area uses public data sources such as MIMIC in electronic health records: text classification approaches have been applied to analyze unstructured clinical notes for ICD code assignment BIBREF21 and automatic intensive emergency prediction BIBREF22 . Sequence-to-sequence textual generation has been used for readable notes based on medical and demographic recordings BIBREF23 . For mental health, there has been more focus on analyzing dialogues. For example, sequential modeling of audio and text have helped detect depression from human-machine interviews BIBREF24 . However, few studies have examined human-human spoken conversations in healthcare settings."
],
[
"We used recordings of nurse-initiated telephone conversations for congestive heart failure patients undergoing telemonitoring, post-discharge from the hospital. The clinical data was acquired by the Health Management Unit at Changi General Hospital. This research study was approved by the SingHealth Centralised Institutional Review Board (Protocol 1556561515). The patients were recruited during 2014-2016 as part of their routine care delivery, and enrolled into the telemonitoring health management program with consent for use of anonymized versions of their data for research.",
"The dataset comprises a total of 353 conversations from 40 speakers (11 nurses, 16 patients, and 13 caregivers) with consent to the use of anonymized data for research. The speakers are 38 to 88 years old, equally distributed across gender, and comprise a range of ethnic groups (55% Chinese, 17% Malay, 14% Indian, 3% Eurasian, and 11% unspecified). The conversations cover 11 topics (e.g., medication compliance, symptom checking, education, greeting) and 9 symptoms (e.g., chest pain, cough) and amount to 41 hours.",
"Data preprocessing and anonymization were performed by a data preparation team, separate from the data analysis team to maintain data confidentiality. The data preparation team followed standard speech recognition transcription guidelines, where words are transcribed verbatim to include false starts, disfluencies, mispronunciations, and private self-talk. Confidential information were marked and clipped off from the audio and transcribed with predefined tags in the annotation. Conversation topics and clinical symptoms were also annotated and clinically validated by certified telehealth nurses."
],
[
"To analyze the linguistic structure of the inquiry-response pairs in the entire 41-hour dataset, we randomly sampled a seed dataset consisting of 1,200 turns and manually categorized them to different types, which are summarized in Table TABREF14 along with the corresponding occurrence frequency statistics. Note that each given utterance could be categorized to more than one type. We elaborate on each utterance type below.",
"Open-ended Inquiry: Inquiries about general well-being or a particular symptom; e.g., “How are you feeling?” and “Do you cough?”",
"Detailed Inquiry: Inquiries with specific details that prompt yes/no answers or clarifications; e.g., “Do you cough at night?”",
"Multi-Intent Inquiry: Inquiring more than one symptom in a question; e.g., “Any cough, chest pain, or headache?”",
"Reconfirmation Inquiry: The nurse reconfirms particular details; e.g., “Really? At night?” and “Serious or mild?”. This case is usually related to explicit or implicit coreferencing.",
"Inquiry with Transitional Clauses: During spoken conversations, one might repeat what the other party said, but it is unrelated to the main clause of the question. This is usually due to private self-talk while thinking aloud, and such utterances form a transitional clause before the speaker starts a new topic; e.g., “Chest pain... no chest pain, I see... any cough?”.",
"Yes/No Response: Yes/No responses seem straightforward, but sometimes lead to misunderstanding if one does not interpret the context appropriately. One case is tag questions: A:“You don't cough at night, do you?” B:`Yes, yes” A:“cough at night?” B:“No, no cough”. Usually when the answer is unclear, clarifying inquiries will be asked for reconfirmation purposes.",
"Detailed Response: Responses that contain specific information of one symptom, like “I felt tightness in my chest”.",
"Response with Revision: Revision is infrequent but can affect comprehension significantly. One cause is thinking aloud so a later response overrules the previous one; e.g., “No dizziness, oh wait... last week I felt a bit dizzy when biking”.",
"Response with Topic Drift: When a symptom/topic like headache is inquired, the response might be: “Only some chest pain at night”, not referring to the original symptom (headache) at all.",
"Response with Transitional Clauses: Repeating some of the previous content, but often unrelated to critical clinical information and usually followed by topic drift. For example, “Swelling... swelling... I don't cough at night”."
],
[
"We divide the construction of data simulation into two stages. In Section SECREF16 , we build templates and expression pools using linguistic analysis followed by manual verification. In Section SECREF20 , we present our proposed framework for generating simulated training data. The templates and framework are verified for logical correctness and clinical soundness."
],
[
"Each utterance in the seed data is categorized according to Table TABREF14 and then abstracted into templates by replacing entity phrases like cough and often with respective placeholders “#symptom#” and “#frequency#”. The templates are refined through verifying logical correctness and injecting expression diversity by linguistically trained researchers. As these replacements do not alter the syntactic structure, we interchange such placeholders with various verbal expressions to enlarge the simulated training set in Section SECREF20 . Clinical validation was also conducted by certified telehealth nurses.",
"For the 9 symptoms (e.g. chest pain, cough) and 5 attributes (e.g., extent, frequency), we collect various expressions from the seed data, and expand them through synonym replacement. Some attributes are unique to a particular symptom; e.g., “left leg” in #location# is only suitable to describe the symptom swelling, but not the symptom headache. Therefore, we only reuse general expressions like “slight” in #extent# across different symptoms to diversify linguistic expressions.",
"Two linguistically trained researchers constructed expression pools for each symptom and each attribute to account for different types of paraphrasing and descriptions. These expression pools are used in Section SECREF20 (c)."
],
[
"Figure FIGREF15 shows the five steps we use to generate multi-turn symptom monitoring dialogue samples.",
"(a) Topic Selection: While nurses might prefer to inquire the symptoms in different orders depending on the patient's history, our preliminary analysis shows that modeling results do not differ noticeably if topics are of equal prior probabilities. Thus we adopt this assumption for simplicity.",
"(b) Template Selection: For each selected topic, one inquiry template and one response template are randomly chosen to compose a turn. To minimize adverse effects of underfitting, we redistributed the frequency distribution in Table TABREF14 : For utterance types that are below 15%, we boosted them to 15%, and the overall relative distribution ranking is balanced and consistent with Table TABREF14 .",
"(c) Enriching Linguistic Expressions: The placeholders in the selected templates are substituted with diverse expressions from the expression pools in Section UID19 to characterize the symptoms and their corresponding attributes.",
"(d) Multi-Turn Dialogue State Tracking: A greedy algorithm is applied to complete conversations. A “completed symptoms” list and a “to-do symptoms” list are used for symptom topic tracking. We also track the “completed attributes\" and “to-do attributes\". For each symptom, all related attributes are iterated. A dialogue ends only when all possible entities are exhausted, generating a multi-turn dialogue sample, which encourages the model to learn from the entire discussion flow rather than a single turn to comprehend contextual dependency. The average length of a simulated dialogue is 184 words, which happens to be twice as long as an average dialogue from the real-world evaluation set. Moreover, to model the roles of the respondents, we set the ratio between patients and caregivers to 2:1; this statistic is inspired by the real scenarios in the seed dataset. For both the caregivers and patients, we assume equal probability of both genders. The corresponding pronouns in the conversations are thus determined by the role and gender of these settings.",
"(e) Multi-Turn Sample Annotation: For each multi-turn dialogue, a query is specified by a symptom and an attribute. The groundtruth output of the QA system is automatically labeled based on the template generation rules, but also manually verified to ensure annotation quality. Moreover, we adopt the unanswerable design in BIBREF6 : when the patient does not mention a particular symptom, the answer is defined as “No Answer”. This process is repeated until all logical permutations of symptoms and attributes are exhausted."
],
[
""
],
[
" We implemented an established model in reading comprehension, a bi-directional attention pointer network BIBREF1 , and equipped it with an answerable classifier, as depicted in Figure FIGREF21 . First, tokens in the given dialogue INLINEFORM0 and query INLINEFORM1 are converted into embedding vectors. Then the dialogue embeddings are fed to a bi-directional LSTM encoding layer, generating a sequence of contextual hidden states. Next, the hidden states and query embeddings are processed by a bi-directional attention layer, fusing attention information in both context-to-query and query-to-context directions. The following two bi-directional LSTM modeling layers read the contextual sequence with attention. Finally, two respective linear layers with softmax functions are used to estimate token INLINEFORM2 's INLINEFORM3 and INLINEFORM4 probability of the answer span INLINEFORM5 .",
"In addition, we add a special tag “[SEQ]” at the head of INLINEFORM0 to account for the case of “No answer” BIBREF4 and adopt an answerable classifier as in BIBREF25 . More specifically, when the queried symptom or attribute is not mentioned in the dialogue, the answer span should point to the tag “[SEQ]” and answerable probability should be predicted as 0."
],
[
"The model was trained via gradient backpropagation with the cross-entropy loss function of answer span prediction and answerable classification, optimized by Adam algorithm BIBREF26 with initial learning rate of INLINEFORM0 . Pre-trained GloVe BIBREF3 embeddings (size INLINEFORM1 ) were used. We re-shuffled training samples at each epoch (batch size INLINEFORM2 ). Out-of-vocabulary words ( INLINEFORM3 ) were replaced with a fixed random vector. L2 regularization and dropout (rate INLINEFORM4 ) were used to alleviate overfitting BIBREF27 ."
],
[
"To evaluate the effectiveness of our linguistically-inspired simulation approach, the model is trained on the simulated data (see Section SECREF20 ). We designed 3 evaluation sets: (1) Base Set (1,264 samples) held out from the simulated data. (2) Augmented Set (1,280 samples) built by adding two out-of-distribution symptoms, with corresponding dialogue contents and queries, to the Base Set (“bleeding” and “cold”, which never appeared in training data). (3) Real-World Set (944 samples) manually delineated from the the symptom checking portions (approximately 4 hours) of real-world dialogues, and annotated as evaluation samples."
],
[
"Evaluation results are in Table TABREF25 with exact match (EM) and F1 score in BIBREF0 metrics. To distinguish the correct answer span from the plausible ones which contain the same words, we measure the scores on the position indices of tokens. Our results show that both EM and F1 score increase with training sample size growing and the optimal size in our setting is 100k. The best-trained model performs well on both the Base Set and the Augmented Set, indicating that out-of-distribution symptoms do not affect the comprehension of existing symptoms and outputs reasonable answers for both in- and out-of-distribution symptoms. On the Real-World Set, we obtained 78.23 EM score and 80.18 F1 score respectively.",
"Error analysis suggests the performance drop from the simulated test sets is due to the following: 1) sparsity issues resulting from the expression pools excluding various valid but sporadic expressions. 2) nurses and patients occasionally chit-chat in the Real-World Set, which is not simulated in the training set. At times, these chit-chats make the conversations overly lengthy, causing the information density to be lower. These issues could potentially distract and confuse the comprehension model. 3) an interesting type of infrequent error source, caused by patients elaborating on possible causal relations of two symptoms. For example, a patient might say “My giddiness may be due to all this cough”. We are currently investigating how to close this performance gap efficiently."
],
[
"To assess the effectiveness of bi-directional attention, we bypassed the bi-attention layer by directly feeding the contextual hidden states and query embeddings to the modeling layer. To evaluate the pre-trained GloVe embeddings, we randomly initialized and trained the embeddings from scratch. These two procedures lead to 10% and 18% performance degradation on the Augmented Set and Real-World Set, respectively (see Table TABREF27 )."
],
[
"We formulated a dialogue comprehension task motivated by the need in telehealth settings to extract key clinical information from spoken conversations between nurses and patients. We analyzed linguistic characteristics of real-world human-human symptom checking dialogues, constructed a simulated dataset based on linguistically inspired and clinically validated templates, and prototyped a QA system. The model works effectively on a simulated test set using symptoms excluded during training and on real-world conversations between nurses and patients. We are currently improving the model's dialogue comprehension capability in complex reasoning and context understanding and also applying the QA model to summarization and virtual nurse applications."
],
[
"Research efforts were supported by funding for Digital Health and Deep Learning I2R (DL2 SSF Project No: A1718g0045) and the Science and Engineering Research Council (SERC Project No: A1818g0044), A*STAR, Singapore. In addition, this work was conducted using resources and infrastructure provided by the Human Language Technology unit at I2R. The telehealth data acquisition was funded by the Economic Development Board (EDB), Singapore Living Lab Fund and Philips Electronics – Hospital to Home Pilot Project (EDB grant reference number: S14-1035-RF-LLF H and W).",
"We acknowledge valuable support and assistance from Yulong Lin, Yuanxin Xiang, and Ai Ti Aw at the Institute for Infocomm Research (I2R); Weiliang Huang at the Changi General Hospital (CGH) Department of Cardiology, Hong Choon Oh at CGH Health Services Research, and Edris Atikah Bte Ahmad, Chiu Yan Ang, and Mei Foon Yap of the CGH Integrated Care Department.",
"We also thank Eduard Hovy and Bonnie Webber for insightful discussions and the anonymous reviewers for their precious feedback to help improve and extend this piece of work."
]
],
"section_name": [
"Problem Statement",
"Motivation of Approach",
"Human-human Spoken Conversations",
"Dialogue Comprehension Task",
"Reading Comprehension",
"NLP for Healthcare",
"Data Preparation",
"Linguistic Characterization on Seed Data",
"Simulating Symptom Monitoring Dataset for Training",
"Template Construction",
"Simulated Data Generation Framework",
"Experiments",
"Model Design",
"Implementation Details",
"Evaluation Setup",
"Results",
"Ablation Analysis",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"345bc8bbd8a775f148bebc6531b087b2960f6df9"
],
"answer": [
{
"evidence": [
"The dataset comprises a total of 353 conversations from 40 speakers (11 nurses, 16 patients, and 13 caregivers) with consent to the use of anonymized data for research. The speakers are 38 to 88 years old, equally distributed across gender, and comprise a range of ethnic groups (55% Chinese, 17% Malay, 14% Indian, 3% Eurasian, and 11% unspecified). The conversations cover 11 topics (e.g., medication compliance, symptom checking, education, greeting) and 9 symptoms (e.g., chest pain, cough) and amount to 41 hours.",
"We divide the construction of data simulation into two stages. In Section SECREF16 , we build templates and expression pools using linguistic analysis followed by manual verification. In Section SECREF20 , we present our proposed framework for generating simulated training data. The templates and framework are verified for logical correctness and clinical soundness."
],
"extractive_spans": [
"353 conversations from 40 speakers (11 nurses, 16 patients, and 13 caregivers)",
"we build templates and expression pools using linguistic analysis"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset comprises a total of 353 conversations from 40 speakers (11 nurses, 16 patients, and 13 caregivers) with consent to the use of anonymized data for research.",
"In Section SECREF16 , we build templates and expression pools using linguistic analysis followed by manual verification."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0f79bfeb342e439695239e27350cd6632987550e",
"1450d5aa10e0aafcb7ebc972a658c9c06349fe38"
],
"answer": [
{
"evidence": [
"We used recordings of nurse-initiated telephone conversations for congestive heart failure patients undergoing telemonitoring, post-discharge from the hospital. The clinical data was acquired by the Health Management Unit at Changi General Hospital. This research study was approved by the SingHealth Centralised Institutional Review Board (Protocol 1556561515). The patients were recruited during 2014-2016 as part of their routine care delivery, and enrolled into the telemonitoring health management program with consent for use of anonymized versions of their data for research.",
"To analyze the linguistic structure of the inquiry-response pairs in the entire 41-hour dataset, we randomly sampled a seed dataset consisting of 1,200 turns and manually categorized them to different types, which are summarized in Table TABREF14 along with the corresponding occurrence frequency statistics. Note that each given utterance could be categorized to more than one type. We elaborate on each utterance type below."
],
"extractive_spans": [],
"free_form_answer": "A sample from nurse-initiated telephone conversations for congestive heart failure patients undergoing telepmonitoring, post-discharge from the Health Management Unit at Changi General Hospital",
"highlighted_evidence": [
"We used recordings of nurse-initiated telephone conversations for congestive heart failure patients undergoing telemonitoring, post-discharge from the hospital.",
"To analyze the linguistic structure of the inquiry-response pairs in the entire 41-hour dataset, we randomly sampled a seed dataset consisting of 1,200 turns and manually categorized them to different types, which are summarized in Table TABREF14 along with the corresponding occurrence frequency statistics."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We used recordings of nurse-initiated telephone conversations for congestive heart failure patients undergoing telemonitoring, post-discharge from the hospital. The clinical data was acquired by the Health Management Unit at Changi General Hospital. This research study was approved by the SingHealth Centralised Institutional Review Board (Protocol 1556561515). The patients were recruited during 2014-2016 as part of their routine care delivery, and enrolled into the telemonitoring health management program with consent for use of anonymized versions of their data for research."
],
"extractive_spans": [
"recordings of nurse-initiated telephone conversations for congestive heart failure patients"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used recordings of nurse-initiated telephone conversations for congestive heart failure patients undergoing telemonitoring, post-discharge from the hospital."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3e2bb4be116f4bbad32e72f2bf1bd942df968983",
"eb3123566c4784e0d415f06e4936e31e1bc0dd2b"
],
"answer": [
{
"evidence": [
"Figure FIGREF5 (b) illustrates the proposed dialogue comprehension task using a question answering (QA) model. The input are a multi-turn symptom checking dialogue INLINEFORM0 and a query INLINEFORM1 specifying a symptom with one of its attributes; the output is the extracted answer INLINEFORM2 from the given dialogue. A training or test sample is defined as INLINEFORM3 . Five attributes, specifying certain details of clinical significance, are defined to characterize the answer types of INLINEFORM4 : (1) the time the patient has been experiencing the symptom, (2) activities that trigger the symptom (to occur or worsen), (3) the extent of seriousness, (4) the frequency occurrence of the symptom, and (5) the location of symptom. For each symptom/attribute, it can take on different linguistic expressions, defined as entities. Note that if the queried symptom or attribute is not mentioned in the dialogue, the groundtruth output is “No Answer”, as in BIBREF6 ."
],
"extractive_spans": [
"(1) the time the patient has been experiencing the symptom, (2) activities that trigger the symptom (to occur or worsen), (3) the extent of seriousness, (4) the frequency occurrence of the symptom, and (5) the location of symptom",
"No Answer"
],
"free_form_answer": "",
"highlighted_evidence": [
"Five attributes, specifying certain details of clinical significance, are defined to characterize the answer types of INLINEFORM4 : (1) the time the patient has been experiencing the symptom, (2) activities that trigger the symptom (to occur or worsen), (3) the extent of seriousness, (4) the frequency occurrence of the symptom, and (5) the location of symptom. For each symptom/attribute, it can take on different linguistic expressions, defined as entities. Note that if the queried symptom or attribute is not mentioned in the dialogue, the groundtruth output is “No Answer”, as in BIBREF6 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The dataset comprises a total of 353 conversations from 40 speakers (11 nurses, 16 patients, and 13 caregivers) with consent to the use of anonymized data for research. The speakers are 38 to 88 years old, equally distributed across gender, and comprise a range of ethnic groups (55% Chinese, 17% Malay, 14% Indian, 3% Eurasian, and 11% unspecified). The conversations cover 11 topics (e.g., medication compliance, symptom checking, education, greeting) and 9 symptoms (e.g., chest pain, cough) and amount to 41 hours.",
"Figure FIGREF5 (b) illustrates the proposed dialogue comprehension task using a question answering (QA) model. The input are a multi-turn symptom checking dialogue INLINEFORM0 and a query INLINEFORM1 specifying a symptom with one of its attributes; the output is the extracted answer INLINEFORM2 from the given dialogue. A training or test sample is defined as INLINEFORM3 . Five attributes, specifying certain details of clinical significance, are defined to characterize the answer types of INLINEFORM4 : (1) the time the patient has been experiencing the symptom, (2) activities that trigger the symptom (to occur or worsen), (3) the extent of seriousness, (4) the frequency occurrence of the symptom, and (5) the location of symptom. For each symptom/attribute, it can take on different linguistic expressions, defined as entities. Note that if the queried symptom or attribute is not mentioned in the dialogue, the groundtruth output is “No Answer”, as in BIBREF6 ."
],
"extractive_spans": [
"the time the patient has been experiencing the symptom",
"activities that trigger the symptom",
"the extent of seriousness",
"the frequency occurrence of the symptom",
"the location of symptom",
"9 symptoms"
],
"free_form_answer": "",
"highlighted_evidence": [
"The conversations cover 11 topics (e.g., medication compliance, symptom checking, education, greeting) and 9 symptoms (e.g., chest pain, cough) and amount to 41 hours.",
"Five attributes, specifying certain details of clinical significance, are defined to characterize the answer types of INLINEFORM4 : (1) the time the patient has been experiencing the symptom, (2) activities that trigger the symptom (to occur or worsen), (3) the extent of seriousness, (4) the frequency occurrence of the symptom, and (5) the location of symptom. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"9eae58b095ebc3160b2964ddb69fa3a6341efa2e",
"c71c10a895b68912afb7c00f942c4d7fe3616dd5"
],
"answer": [
{
"evidence": [
"To evaluate the effectiveness of our linguistically-inspired simulation approach, the model is trained on the simulated data (see Section SECREF20 ). We designed 3 evaluation sets: (1) Base Set (1,264 samples) held out from the simulated data. (2) Augmented Set (1,280 samples) built by adding two out-of-distribution symptoms, with corresponding dialogue contents and queries, to the Base Set (“bleeding” and “cold”, which never appeared in training data). (3) Real-World Set (944 samples) manually delineated from the the symptom checking portions (approximately 4 hours) of real-world dialogues, and annotated as evaluation samples."
],
"extractive_spans": [],
"free_form_answer": "1264 instances from simulated data, 1280 instances by adding two out-of-distribution symptoms and 944 instances manually delineated from the symptom checking portions of real-word dialogues",
"highlighted_evidence": [
"We designed 3 evaluation sets: (1) Base Set (1,264 samples) held out from the simulated data. (2) Augmented Set (1,280 samples) built by adding two out-of-distribution symptoms, with corresponding dialogue contents and queries, to the Base Set (“bleeding” and “cold”, which never appeared in training data). (3) Real-World Set (944 samples) manually delineated from the the symptom checking portions (approximately 4 hours) of real-world dialogues, and annotated as evaluation samples."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To evaluate the effectiveness of our linguistically-inspired simulation approach, the model is trained on the simulated data (see Section SECREF20 ). We designed 3 evaluation sets: (1) Base Set (1,264 samples) held out from the simulated data. (2) Augmented Set (1,280 samples) built by adding two out-of-distribution symptoms, with corresponding dialogue contents and queries, to the Base Set (“bleeding” and “cold”, which never appeared in training data). (3) Real-World Set (944 samples) manually delineated from the the symptom checking portions (approximately 4 hours) of real-world dialogues, and annotated as evaluation samples."
],
"extractive_spans": [
"held out from the simulated data"
],
"free_form_answer": "",
"highlighted_evidence": [
"We designed 3 evaluation sets: (1) Base Set (1,264 samples) held out from the simulated data. (2) Augmented Set (1,280 samples) built by adding two out-of-distribution symptoms, with corresponding dialogue contents and queries, to the Base Set (“bleeding” and “cold”, which never appeared in training data). (3) Real-World Set (944 samples) manually delineated from the the symptom checking portions (approximately 4 hours) of real-world dialogues, and annotated as evaluation samples."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How big is their created dataset?",
"Which data do they use as a starting point for the dialogue dataset?",
"What labels do they create on their dataset?",
"How do they select instances to their hold-out test set?"
],
"question_id": [
"1b1a30e9e68a9ae76af467e60cefb180d135e285",
"2c85865a65acd429508f50b5e4db9674813d67f2",
"73a7acf33b26f5e9475ee975ba00d14fd06f170f",
"dd53baf26dad3d74872f2d8956c9119a27269bd5"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Dialogue comprehension of symptom checking conversations.",
"Table 1: Linguistic characterization of inquiryresponse types and their occurrence frequency from the seed data in Section 3.2.",
"Figure 2: Simulated data generation framework.",
"Table 2: QA model evaluation results. Each sample is a simulated multi-turn conversation.",
"Figure 3: Bi-directional attention pointer network with an answerable classifier for dialogue comprehension.",
"Table 3: Ablation experiments on 100K training size."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"5-Figure2-1.png",
"6-Table2-1.png",
"6-Figure3-1.png",
"7-Table3-1.png"
]
} | [
"Which data do they use as a starting point for the dialogue dataset?",
"How do they select instances to their hold-out test set?"
] | [
[
"1903.03530-Data Preparation-0",
"1903.03530-Linguistic Characterization on Seed Data-0"
],
[
"1903.03530-Evaluation Setup-0"
]
] | [
"A sample from nurse-initiated telephone conversations for congestive heart failure patients undergoing telepmonitoring, post-discharge from the Health Management Unit at Changi General Hospital",
"1264 instances from simulated data, 1280 instances by adding two out-of-distribution symptoms and 944 instances manually delineated from the symptom checking portions of real-word dialogues"
] | 150 |
1809.03449 | Explicit Utilization of General Knowledge in Machine Reading Comprehension | To bridge the gap between Machine Reading Comprehension (MRC) models and human beings, which is mainly reflected in the hunger for data and the robustness to noise, in this paper, we explore how to integrate the neural networks of MRC models with the general knowledge of human beings. On the one hand, we propose a data enrichment method, which uses WordNet to extract inter-word semantic connections as general knowledge from each given passage-question pair. On the other hand, we propose an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the above extracted general knowledge to assist its attention mechanisms. Based on the data enrichment method, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them. When only a subset (20%-80%) of the training examples are available, KAR outperforms the state-of-the-art MRC models by a large margin, and is still reasonably robust to noise. | {
"paragraphs": [
[
"Machine Reading Comprehension (MRC), as the name suggests, requires a machine to read a passage and answer its relevant questions. Since the answer to each question is supposed to stem from the corresponding passage, a common MRC solution is to develop a neural-network-based MRC model that predicts an answer span (i.e. the answer start position and the answer end position) from the passage of each given passage-question pair. To facilitate the explorations and innovations in this area, many MRC datasets have been established, such as SQuAD BIBREF0 , MS MARCO BIBREF1 , and TriviaQA BIBREF2 . Consequently, many pioneering MRC models have been proposed, such as BiDAF BIBREF3 , R-NET BIBREF4 , and QANet BIBREF5 . According to the leader board of SQuAD, the state-of-the-art MRC models have achieved the same performance as human beings. However, does this imply that they have possessed the same reading comprehension ability as human beings?",
"OF COURSE NOT. There is a huge gap between MRC models and human beings, which is mainly reflected in the hunger for data and the robustness to noise. On the one hand, developing MRC models requires a large amount of training examples (i.e. the passage-question pairs labeled with answer spans), while human beings can achieve good performance on evaluation examples (i.e. the passage-question pairs to address) without training examples. On the other hand, BIBREF6 revealed that intentionally injected noise (e.g. misleading sentences) in evaluation examples causes the performance of MRC models to drop significantly, while human beings are far less likely to suffer from this. The reason for these phenomena, we believe, is that MRC models can only utilize the knowledge contained in each given passage-question pair, but in addition to this, human beings can also utilize general knowledge. A typical category of general knowledge is inter-word semantic connections. As shown in Table TABREF1 , such general knowledge is essential to the reading comprehension ability of human beings.",
"A promising strategy to bridge the gap mentioned above is to integrate the neural networks of MRC models with the general knowledge of human beings. To this end, it is necessary to solve two problems: extracting general knowledge from passage-question pairs and utilizing the extracted general knowledge in the prediction of answer spans. The first problem can be solved with knowledge bases, which store general knowledge in structured forms. A broad variety of knowledge bases are available, such as WordNet BIBREF7 storing semantic knowledge, ConceptNet BIBREF8 storing commonsense knowledge, and Freebase BIBREF9 storing factoid knowledge. In this paper, we limit the scope of general knowledge to inter-word semantic connections, and thus use WordNet as our knowledge base. The existing way to solve the second problem is to encode general knowledge in vector space so that the encoding results can be used to enhance the lexical or contextual representations of words BIBREF10 , BIBREF11 . However, this is an implicit way to utilize general knowledge, since in this way we can neither understand nor control the functioning of general knowledge. In this paper, we discard the existing implicit way and instead explore an explicit (i.e. understandable and controllable) way to utilize general knowledge.",
"The contribution of this paper is two-fold. On the one hand, we propose a data enrichment method, which uses WordNet to extract inter-word semantic connections as general knowledge from each given passage-question pair. On the other hand, we propose an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the above extracted general knowledge to assist its attention mechanisms. Based on the data enrichment method, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them. When only a subset ( INLINEFORM0 – INLINEFORM1 ) of the training examples are available, KAR outperforms the state-of-the-art MRC models by a large margin, and is still reasonably robust to noise."
],
[
"In this section, we elaborate a WordNet-based data enrichment method, which is aimed at extracting inter-word semantic connections from each passage-question pair in our MRC dataset. The extraction is performed in a controllable manner, and the extracted results are provided as general knowledge to our MRC model."
],
[
"WordNet is a lexical database of English, where words are organized into synsets according to their senses. A synset is a set of words expressing the same sense so that a word having multiple senses belongs to multiple synsets, with each synset corresponding to a sense. Synsets are further related to each other through semantic relations. According to the WordNet interface provided by NLTK BIBREF12 , there are totally sixteen types of semantic relations (e.g. hypernyms, hyponyms, holonyms, meronyms, attributes, etc.). Based on synset and semantic relation, we define a new concept: semantic relation chain. A semantic relation chain is a concatenated sequence of semantic relations, which links a synset to another synset. For example, the synset “keratin.n.01” is related to the synset “feather.n.01” through the semantic relation “substance holonym”, the synset “feather.n.01” is related to the synset “bird.n.01” through the semantic relation “part holonym”, and the synset “bird.n.01” is related to the synset “parrot.n.01” through the semantic relation “hyponym”, thus “substance holonym INLINEFORM0 part holonym INLINEFORM1 hyponym” is a semantic relation chain, which links the synset “keratin.n.01” to the synset “parrot.n.01”. We name each semantic relation in a semantic relation chain as a hop, therefore the above semantic relation chain is a 3-hop chain. By the way, each single semantic relation is equivalent to a 1-hop chain."
],
[
"The key problem in the data enrichment method is determining whether a word is semantically connected to another word. If so, we say that there exists an inter-word semantic connection between them. To solve this problem, we define another new concept: the extended synsets of a word. Given a word INLINEFORM0 , whose synsets are represented as a set INLINEFORM1 , we use another set INLINEFORM2 to represent its extended synsets, which includes all the synsets that are in INLINEFORM3 or that can be linked to from INLINEFORM4 through semantic relation chains. Theoretically, if there is no limitation on semantic relation chains, INLINEFORM5 will include all the synsets in WordNet, which is meaningless in most situations. Therefore, we use a hyper-parameter INLINEFORM6 to represent the permitted maximum hop count of semantic relation chains. That is to say, only the chains having no more than INLINEFORM7 hops can be used to construct INLINEFORM8 so that INLINEFORM9 becomes a function of INLINEFORM10 : INLINEFORM11 (if INLINEFORM12 , we will have INLINEFORM13 ). Based on the above statements, we formulate a heuristic rule for determining inter-word semantic connections: a word INLINEFORM14 is semantically connected to another word INLINEFORM15 if and only if INLINEFORM16 ."
],
[
"Given a passage-question pair, the inter-word semantic connections that connect any word to any passage word are regarded as the general knowledge we need to extract. Considering the requirements of our MRC model, we only extract the positional information of such inter-word semantic connections. Specifically, for each word INLINEFORM0 , we extract a set INLINEFORM1 , which includes the positions of the passage words that INLINEFORM2 is semantically connected to (if INLINEFORM3 itself is a passage word, we will exclude its own position from INLINEFORM4 ). We can control the amount of the extracted results by setting the hyper-parameter INLINEFORM5 : if we set INLINEFORM6 to 0, inter-word semantic connections will only exist between synonyms; if we increase INLINEFORM7 , inter-word semantic connections will exist between more words. That is to say, by increasing INLINEFORM8 within a certain range, we can usually extract more inter-word semantic connections from a passage-question pair, and thus can provide the MRC model with more general knowledge. However, due to the complexity and diversity of natural languages, only a part of the extracted results can serve as useful general knowledge, while the rest of them are useless for the prediction of answer spans, and the proportion of the useless part always rises when INLINEFORM9 is set larger. Therefore we set INLINEFORM10 through cross validation (i.e. according to the performance of the MRC model on the development examples)."
],
[
"In this section, we elaborate our MRC model: Knowledge Aided Reader (KAR). The key components of most existing MRC models are their attention mechanisms BIBREF13 , which are aimed at fusing the associated representations of each given passage-question pair. These attention mechanisms generally fall into two categories: the first one, which we name as mutual attention, is aimed at fusing the question representations into the passage representations so as to obtain the question-aware passage representations; the second one, which we name as self attention, is aimed at fusing the question-aware passage representations into themselves so as to obtain the final passage representations. Although KAR is equipped with both categories, its most remarkable feature is that it explicitly uses the general knowledge extracted by the data enrichment method to assist its attention mechanisms. Therefore we separately name the attention mechanisms of KAR as knowledge aided mutual attention and knowledge aided self attention."
],
[
"Given a passage INLINEFORM0 and a relevant question INLINEFORM1 , the task is to predict an answer span INLINEFORM2 , where INLINEFORM3 , so that the resulting subsequence INLINEFORM4 from INLINEFORM5 is an answer to INLINEFORM6 ."
],
[
"As shown in Figure FIGREF7 , KAR is an end-to-end MRC model consisting of five layers:",
"Lexicon Embedding Layer. This layer maps the words to the lexicon embeddings. The lexicon embedding of each word is composed of its word embedding and character embedding. For each word, we use the pre-trained GloVe BIBREF14 word vector as its word embedding, and obtain its character embedding with a Convolutional Neural Network (CNN) BIBREF15 . For both the passage and the question, we pass the concatenation of the word embeddings and the character embeddings through a shared dense layer with ReLU activation, whose output dimensionality is INLINEFORM0 . Therefore we obtain the passage lexicon embeddings INLINEFORM1 and the question lexicon embeddings INLINEFORM2 .",
"Context Embedding Layer. This layer maps the lexicon embeddings to the context embeddings. For both the passage and the question, we process the lexicon embeddings (i.e. INLINEFORM0 for the passage and INLINEFORM1 for the question) with a shared bidirectional LSTM (BiLSTM) BIBREF16 , whose hidden state dimensionality is INLINEFORM2 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the passage context embeddings INLINEFORM3 and the question context embeddings INLINEFORM4 .",
"Coarse Memory Layer. This layer maps the context embeddings to the coarse memories. First we use knowledge aided mutual attention (introduced later) to fuse INLINEFORM0 into INLINEFORM1 , the outputs of which are represented as INLINEFORM2 . Then we process INLINEFORM3 with a BiLSTM, whose hidden state dimensionality is INLINEFORM4 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the coarse memories INLINEFORM5 , which are the question-aware passage representations.",
"Refined Memory Layer. This layer maps the coarse memories to the refined memories. First we use knowledge aided self attention (introduced later) to fuse INLINEFORM0 into themselves, the outputs of which are represented as INLINEFORM1 . Then we process INLINEFORM2 with a BiLSTM, whose hidden state dimensionality is INLINEFORM3 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the refined memories INLINEFORM4 , which are the final passage representations.",
"Answer Span Prediction Layer. This layer predicts the answer start position and the answer end position based on the above layers. First we obtain the answer start position distribution INLINEFORM0 : INLINEFORM1 INLINEFORM2 ",
"where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters; INLINEFORM3 represents the refined memory of each passage word INLINEFORM4 (i.e. the INLINEFORM5 -th column in INLINEFORM6 ); INLINEFORM7 represents the question summary obtained by performing an attention pooling over INLINEFORM8 . Then we obtain the answer end position distribution INLINEFORM9 : INLINEFORM10 INLINEFORM11 ",
"where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters; INLINEFORM3 represents vector concatenation. Finally we construct an answer span prediction matrix INLINEFORM4 , where INLINEFORM5 represents the upper triangular matrix of a matrix INLINEFORM6 . Therefore, for the training, we minimize INLINEFORM7 on each training example whose labeled answer span is INLINEFORM8 ; for the inference, we separately take the row index and column index of the maximum element in INLINEFORM9 as INLINEFORM10 and INLINEFORM11 ."
],
[
"As a part of the coarse memory layer, knowledge aided mutual attention is aimed at fusing the question context embeddings INLINEFORM0 into the passage context embeddings INLINEFORM1 , where the key problem is to calculate the similarity between each passage context embedding INLINEFORM2 (i.e. the INLINEFORM3 -th column in INLINEFORM4 ) and each question context embedding INLINEFORM5 (i.e. the INLINEFORM6 -th column in INLINEFORM7 ). To solve this problem, BIBREF3 proposed a similarity function: INLINEFORM8 ",
"where INLINEFORM0 is a trainable parameter; INLINEFORM1 represents element-wise multiplication. This similarity function has also been adopted by several other works BIBREF17 , BIBREF5 . However, since context embeddings contain high-level information, we believe that introducing the pre-extracted general knowledge into the calculation of such similarities will make the results more reasonable. Therefore we modify the above similarity function to the following form: INLINEFORM2 ",
"where INLINEFORM0 represents the enhanced context embedding of a word INLINEFORM1 . We use the pre-extracted general knowledge to construct the enhanced context embeddings. Specifically, for each word INLINEFORM2 , whose context embedding is INLINEFORM3 , to construct its enhanced context embedding INLINEFORM4 , first recall that we have extracted a set INLINEFORM5 , which includes the positions of the passage words that INLINEFORM6 is semantically connected to, thus by gathering the columns in INLINEFORM7 whose indexes are given by INLINEFORM8 , we obtain the matching context embeddings INLINEFORM9 . Then by constructing a INLINEFORM10 -attended summary of INLINEFORM11 , we obtain the matching vector INLINEFORM12 (if INLINEFORM13 , which makes INLINEFORM14 , we will set INLINEFORM15 ): INLINEFORM16 INLINEFORM17 ",
"where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters; INLINEFORM3 represents the INLINEFORM4 -th column in INLINEFORM5 . Finally we pass the concatenation of INLINEFORM6 and INLINEFORM7 through a dense layer with ReLU activation, whose output dimensionality is INLINEFORM8 . Therefore we obtain the enhanced context embedding INLINEFORM9 .",
"Based on the modified similarity function and the enhanced context embeddings, to perform knowledge aided mutual attention, first we construct a knowledge aided similarity matrix INLINEFORM0 , where each element INLINEFORM1 . Then following BIBREF5 , we construct the passage-attended question summaries INLINEFORM2 and the question-attended passage summaries INLINEFORM3 : INLINEFORM4 INLINEFORM5 ",
"where INLINEFORM0 represents softmax along the row dimension and INLINEFORM1 along the column dimension. Finally following BIBREF17 , we pass the concatenation of INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 through a dense layer with ReLU activation, whose output dimensionality is INLINEFORM6 . Therefore we obtain the outputs INLINEFORM7 ."
],
[
"As a part of the refined memory layer, knowledge aided self attention is aimed at fusing the coarse memories INLINEFORM0 into themselves. If we simply follow the self attentions of other works BIBREF4 , BIBREF18 , BIBREF19 , BIBREF17 , then for each passage word INLINEFORM1 , we should fuse its coarse memory INLINEFORM2 (i.e. the INLINEFORM3 -th column in INLINEFORM4 ) with the coarse memories of all the other passage words. However, we believe that this is both unnecessary and distracting, since each passage word has nothing to do with many of the other passage words. Thus we use the pre-extracted general knowledge to guarantee that the fusion of coarse memories for each passage word will only involve a precise subset of the other passage words. Specifically, for each passage word INLINEFORM5 , whose coarse memory is INLINEFORM6 , to perform the fusion of coarse memories, first recall that we have extracted a set INLINEFORM7 , which includes the positions of the other passage words that INLINEFORM8 is semantically connected to, thus by gathering the columns in INLINEFORM9 whose indexes are given by INLINEFORM10 , we obtain the matching coarse memories INLINEFORM11 . Then by constructing a INLINEFORM12 -attended summary of INLINEFORM13 , we obtain the matching vector INLINEFORM14 (if INLINEFORM15 , which makes INLINEFORM16 , we will set INLINEFORM17 ): INLINEFORM18 INLINEFORM19 ",
"where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. Finally we pass the concatenation of INLINEFORM3 and INLINEFORM4 through a dense layer with ReLU activation, whose output dimensionality is INLINEFORM5 . Therefore we obtain the fusion result INLINEFORM6 , and further the outputs INLINEFORM7 ."
],
[
"Attention Mechanisms. Besides those mentioned above, other interesting attention mechanisms include performing multi-round alignment to avoid the problems of attention redundancy and attention deficiency BIBREF20 , and using mutual attention as a skip-connector to densely connect pairwise layers BIBREF21 .",
"Data Augmentation. It is proved that properly augmenting training examples can improve the performance of MRC models. For example, BIBREF22 trained a generative model to generate questions based on unlabeled text, which substantially boosted their performance; BIBREF5 trained a back-and-forth translation model to paraphrase training examples, which brought them a significant performance gain.",
"Multi-step Reasoning. Inspired by the fact that human beings are capable of understanding complex documents by reading them over and over again, multi-step reasoning was proposed to better deal with difficult MRC tasks. For example, BIBREF23 used reinforcement learning to dynamically determine the number of reasoning steps; BIBREF19 fixed the number of reasoning steps, but used stochastic dropout in the output layer to avoid step bias.",
"Linguistic Embeddings. It is both easy and effective to incorporate linguistic embeddings into the input layer of MRC models. For example, BIBREF24 and BIBREF19 used POS embeddings and NER embeddings to construct their input embeddings; BIBREF25 used structural embeddings based on parsing trees to constructed their input embeddings.",
"Transfer Learning. Several recent breakthroughs in MRC benefit from feature-based transfer learning BIBREF26 , BIBREF27 and fine-tuning-based transfer learning BIBREF28 , BIBREF29 , which are based on certain word-level or sentence-level models pre-trained on large external corpora in certain supervised or unsupervised manners."
],
[
"MRC Dataset. The MRC dataset used in this paper is SQuAD 1.1, which contains over INLINEFORM0 passage-question pairs and has been randomly partitioned into three parts: a training set ( INLINEFORM1 ), a development set ( INLINEFORM2 ), and a test set ( INLINEFORM3 ). Besides, we also use two of its adversarial sets, namely AddSent and AddOneSent BIBREF6 , to evaluate the robustness to noise of MRC models. The passages in the adversarial sets contain misleading sentences, which are aimed at distracting MRC models. Specifically, each passage in AddSent contains several sentences that are similar to the question but not contradictory to the answer, while each passage in AddOneSent contains a human-approved random sentence that may be unrelated to the passage.",
"Implementation Details. We tokenize the MRC dataset with spaCy 2.0.13 BIBREF30 , manipulate WordNet 3.0 with NLTK 3.3, and implement KAR with TensorFlow 1.11.0 BIBREF31 . For the data enrichment method, we set the hyper-parameter INLINEFORM0 to 3. For the dense layers and the BiLSTMs, we set the dimensionality unit INLINEFORM1 to 600. For model optimization, we apply the Adam BIBREF32 optimizer with a learning rate of INLINEFORM2 and a mini-batch size of 32. For model evaluation, we use Exact Match (EM) and F1 score as evaluation metrics. To avoid overfitting, we apply dropout BIBREF33 to the dense layers and the BiLSTMs with a dropout rate of INLINEFORM3 . To boost the performance, we apply exponential moving average with a decay rate of INLINEFORM4 ."
],
[
"We compare KAR with other MRC models in both performance and the robustness to noise. Specifically, we not only evaluate the performance of KAR on the development set and the test set, but also do this on the adversarial sets. As for the comparative objects, we only consider the single MRC models that rank in the top 20 on the SQuAD 1.1 leader board and have reported their performance on the adversarial sets. There are totally five such comparative objects, which can be considered as representatives of the state-of-the-art MRC models. As shown in Table TABREF12 , on the development set and the test set, the performance of KAR is on par with that of the state-of-the-art MRC models; on the adversarial sets, KAR outperforms the state-of-the-art MRC models by a large margin. That is to say, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them.",
"To verify the effectiveness of general knowledge, we first study the relationship between the amount of general knowledge and the performance of KAR. As shown in Table TABREF13 , by increasing INLINEFORM0 from 0 to 5 in the data enrichment method, the amount of general knowledge rises monotonically, but the performance of KAR first rises until INLINEFORM1 reaches 3 and then drops down. Then we conduct an ablation study by replacing the knowledge aided attention mechanisms with the mutual attention proposed by BIBREF3 and the self attention proposed by BIBREF4 separately, and find that the F1 score of KAR drops by INLINEFORM2 on the development set, INLINEFORM3 on AddSent, and INLINEFORM4 on AddOneSent. Finally we find that after only one epoch of training, KAR already achieves an EM of INLINEFORM5 and an F1 score of INLINEFORM6 on the development set, which is even better than the final performance of several strong baselines, such as DCN (EM / F1: INLINEFORM7 / INLINEFORM8 ) BIBREF36 and BiDAF (EM / F1: INLINEFORM9 / INLINEFORM10 ) BIBREF3 . The above empirical findings imply that general knowledge indeed plays an effective role in KAR.",
"To demonstrate the advantage of our explicit way to utilize general knowledge over the existing implicit way, we compare the performance of KAR with that reported by BIBREF10 , which used an encoding-based method to utilize the general knowledge dynamically retrieved from Wikipedia and ConceptNet. Since their best model only achieved an EM of INLINEFORM0 and an F1 score of INLINEFORM1 on the development set, which is much lower than the performance of KAR, we have good reason to believe that our explicit way works better than the existing implicit way."
],
[
"We compare KAR with other MRC models in the hunger for data. Specifically, instead of using all the training examples, we produce several training subsets (i.e. subsets of the training examples) so as to study the relationship between the proportion of the available training examples and the performance. We produce each training subset by sampling a specific number of questions from all the questions relevant to each passage. By separately sampling 1, 2, 3, and 4 questions on each passage, we obtain four training subsets, which separately contain INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 of the training examples. As shown in Figure FIGREF15 , with KAR, SAN (re-implemented), and QANet (re-implemented without data augmentation) trained on these training subsets, we evaluate their performance on the development set, and find that KAR performs much better than SAN and QANet. As shown in Figure FIGREF16 and Figure FIGREF17 , with the above KAR, SAN, and QANet trained on the same training subsets, we also evaluate their performance on the adversarial sets, and still find that KAR performs much better than SAN and QANet. That is to say, when only a subset of the training examples are available, KAR outperforms the state-of-the-art MRC models by a large margin, and is still reasonably robust to noise."
],
[
"According to the experimental results, KAR is not only comparable in performance with the state-of-the-art MRC models, but also superior to them in terms of both the hunger for data and the robustness to noise. The reasons for these achievements, we believe, are as follows:"
],
[
"In this paper, we innovatively integrate the neural networks of MRC models with the general knowledge of human beings. Specifically, inter-word semantic connections are first extracted from each given passage-question pair by a WordNet-based data enrichment method, and then provided as general knowledge to an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the general knowledge to assist its attention mechanisms. Experimental results show that KAR is not only comparable in performance with the state-of-the-art MRC models, but also superior to them in terms of both the hunger for data and the robustness to noise. In the future, we plan to use some larger knowledge bases, such as ConceptNet and Freebase, to improve the quality and scope of the general knowledge."
],
[
"This work is partially supported by a research donation from iFLYTEK Co., Ltd., Hefei, China, and a discovery grant from Natural Sciences and Engineering Research Council (NSERC) of Canada."
]
],
"section_name": [
"Introduction",
"Data Enrichment Method",
"Semantic Relation Chain",
"Inter-word Semantic Connection",
"General Knowledge Extraction",
"Knowledge Aided Reader",
"Task Definition",
"Overall Architecture",
"Knowledge Aided Mutual Attention",
"Knowledge Aided Self Attention",
"Related Works",
"Experimental Settings",
"Model Comparison in both Performance and the Robustness to Noise",
"Model Comparison in the Hunger for Data",
"Analysis",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"3e603d65b339c0c13d718455a166e3127293cf89",
"41a0f1f9fe6f9624aac3ef583cdc9a23b6ee1b18"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"A promising strategy to bridge the gap mentioned above is to integrate the neural networks of MRC models with the general knowledge of human beings. To this end, it is necessary to solve two problems: extracting general knowledge from passage-question pairs and utilizing the extracted general knowledge in the prediction of answer spans. The first problem can be solved with knowledge bases, which store general knowledge in structured forms. A broad variety of knowledge bases are available, such as WordNet BIBREF7 storing semantic knowledge, ConceptNet BIBREF8 storing commonsense knowledge, and Freebase BIBREF9 storing factoid knowledge. In this paper, we limit the scope of general knowledge to inter-word semantic connections, and thus use WordNet as our knowledge base. The existing way to solve the second problem is to encode general knowledge in vector space so that the encoding results can be used to enhance the lexical or contextual representations of words BIBREF10 , BIBREF11 . However, this is an implicit way to utilize general knowledge, since in this way we can neither understand nor control the functioning of general knowledge. In this paper, we discard the existing implicit way and instead explore an explicit (i.e. understandable and controllable) way to utilize general knowledge.",
"WordNet is a lexical database of English, where words are organized into synsets according to their senses. A synset is a set of words expressing the same sense so that a word having multiple senses belongs to multiple synsets, with each synset corresponding to a sense. Synsets are further related to each other through semantic relations. According to the WordNet interface provided by NLTK BIBREF12 , there are totally sixteen types of semantic relations (e.g. hypernyms, hyponyms, holonyms, meronyms, attributes, etc.). Based on synset and semantic relation, we define a new concept: semantic relation chain. A semantic relation chain is a concatenated sequence of semantic relations, which links a synset to another synset. For example, the synset “keratin.n.01” is related to the synset “feather.n.01” through the semantic relation “substance holonym”, the synset “feather.n.01” is related to the synset “bird.n.01” through the semantic relation “part holonym”, and the synset “bird.n.01” is related to the synset “parrot.n.01” through the semantic relation “hyponym”, thus “substance holonym INLINEFORM0 part holonym INLINEFORM1 hyponym” is a semantic relation chain, which links the synset “keratin.n.01” to the synset “parrot.n.01”. We name each semantic relation in a semantic relation chain as a hop, therefore the above semantic relation chain is a 3-hop chain. By the way, each single semantic relation is equivalent to a 1-hop chain."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we limit the scope of general knowledge to inter-word semantic connections, and thus use WordNet as our knowledge base.",
"WordNet is a lexical database of English, where words are organized into synsets according to their senses."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3af2897323451fbaab795d1bdfa5f5f2706617a3",
"e8c7cc12e163d067d5605b45d396fa496715397e"
],
"answer": [
{
"evidence": [
"MRC Dataset. The MRC dataset used in this paper is SQuAD 1.1, which contains over INLINEFORM0 passage-question pairs and has been randomly partitioned into three parts: a training set ( INLINEFORM1 ), a development set ( INLINEFORM2 ), and a test set ( INLINEFORM3 ). Besides, we also use two of its adversarial sets, namely AddSent and AddOneSent BIBREF6 , to evaluate the robustness to noise of MRC models. The passages in the adversarial sets contain misleading sentences, which are aimed at distracting MRC models. Specifically, each passage in AddSent contains several sentences that are similar to the question but not contradictory to the answer, while each passage in AddOneSent contains a human-approved random sentence that may be unrelated to the passage.",
"We compare KAR with other MRC models in both performance and the robustness to noise. Specifically, we not only evaluate the performance of KAR on the development set and the test set, but also do this on the adversarial sets. As for the comparative objects, we only consider the single MRC models that rank in the top 20 on the SQuAD 1.1 leader board and have reported their performance on the adversarial sets. There are totally five such comparative objects, which can be considered as representatives of the state-of-the-art MRC models. As shown in Table TABREF12 , on the development set and the test set, the performance of KAR is on par with that of the state-of-the-art MRC models; on the adversarial sets, KAR outperforms the state-of-the-art MRC models by a large margin. That is to say, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them."
],
"extractive_spans": [],
"free_form_answer": "By evaluating their model on adversarial sets containing misleading sentences",
"highlighted_evidence": [
"Besides, we also use two of its adversarial sets, namely AddSent and AddOneSent BIBREF6 , to evaluate the robustness to noise of MRC models. The passages in the adversarial sets contain misleading sentences, which are aimed at distracting MRC models. ",
"Specifically, we not only evaluate the performance of KAR on the development set and the test set, but also do this on the adversarial sets. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"MRC Dataset. The MRC dataset used in this paper is SQuAD 1.1, which contains over INLINEFORM0 passage-question pairs and has been randomly partitioned into three parts: a training set ( INLINEFORM1 ), a development set ( INLINEFORM2 ), and a test set ( INLINEFORM3 ). Besides, we also use two of its adversarial sets, namely AddSent and AddOneSent BIBREF6 , to evaluate the robustness to noise of MRC models. The passages in the adversarial sets contain misleading sentences, which are aimed at distracting MRC models. Specifically, each passage in AddSent contains several sentences that are similar to the question but not contradictory to the answer, while each passage in AddOneSent contains a human-approved random sentence that may be unrelated to the passage."
],
"extractive_spans": [
"we also use two of its adversarial sets, namely AddSent and AddOneSent BIBREF6 , to evaluate the robustness to noise"
],
"free_form_answer": "",
"highlighted_evidence": [
"Besides, we also use two of its adversarial sets, namely AddSent and AddOneSent BIBREF6 , to evaluate the robustness to noise of MRC models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c24db7234309d8f1b7e337ea0cfed6fa1fb3c882"
],
"answer": [
{
"evidence": [
"As shown in Figure FIGREF7 , KAR is an end-to-end MRC model consisting of five layers:",
"Lexicon Embedding Layer. This layer maps the words to the lexicon embeddings. The lexicon embedding of each word is composed of its word embedding and character embedding. For each word, we use the pre-trained GloVe BIBREF14 word vector as its word embedding, and obtain its character embedding with a Convolutional Neural Network (CNN) BIBREF15 . For both the passage and the question, we pass the concatenation of the word embeddings and the character embeddings through a shared dense layer with ReLU activation, whose output dimensionality is INLINEFORM0 . Therefore we obtain the passage lexicon embeddings INLINEFORM1 and the question lexicon embeddings INLINEFORM2 .",
"Context Embedding Layer. This layer maps the lexicon embeddings to the context embeddings. For both the passage and the question, we process the lexicon embeddings (i.e. INLINEFORM0 for the passage and INLINEFORM1 for the question) with a shared bidirectional LSTM (BiLSTM) BIBREF16 , whose hidden state dimensionality is INLINEFORM2 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the passage context embeddings INLINEFORM3 and the question context embeddings INLINEFORM4 .",
"Coarse Memory Layer. This layer maps the context embeddings to the coarse memories. First we use knowledge aided mutual attention (introduced later) to fuse INLINEFORM0 into INLINEFORM1 , the outputs of which are represented as INLINEFORM2 . Then we process INLINEFORM3 with a BiLSTM, whose hidden state dimensionality is INLINEFORM4 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the coarse memories INLINEFORM5 , which are the question-aware passage representations.",
"Refined Memory Layer. This layer maps the coarse memories to the refined memories. First we use knowledge aided self attention (introduced later) to fuse INLINEFORM0 into themselves, the outputs of which are represented as INLINEFORM1 . Then we process INLINEFORM2 with a BiLSTM, whose hidden state dimensionality is INLINEFORM3 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the refined memories INLINEFORM4 , which are the final passage representations.",
"Answer Span Prediction Layer. This layer predicts the answer start position and the answer end position based on the above layers. First we obtain the answer start position distribution INLINEFORM0 : INLINEFORM1 INLINEFORM2"
],
"extractive_spans": [
"Lexicon Embedding Layer",
"Context Embedding Layer",
"Coarse Memory Layer",
"Refined Memory Layer",
"Answer Span Prediction Layer"
],
"free_form_answer": "",
"highlighted_evidence": [
"As shown in Figure FIGREF7 , KAR is an end-to-end MRC model consisting of five layers:\n\nLexicon Embedding Layer. This layer maps the words to the lexicon embeddings.",
"Context Embedding Layer. This layer maps the lexicon embeddings to the context embeddings.",
"Coarse Memory Layer. This layer maps the context embeddings to the coarse memories.",
"Refined Memory Layer. This layer maps the coarse memories to the refined memories.",
"Answer Span Prediction Layer. This layer predicts the answer start position and the answer end position based on the above layers."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1069bf9ef5efec14211661645001add3191f0133",
"5e16f059f6db3595e99e88d4d2995b32afe61fab"
],
"answer": [
{
"evidence": [
"To verify the effectiveness of general knowledge, we first study the relationship between the amount of general knowledge and the performance of KAR. As shown in Table TABREF13 , by increasing INLINEFORM0 from 0 to 5 in the data enrichment method, the amount of general knowledge rises monotonically, but the performance of KAR first rises until INLINEFORM1 reaches 3 and then drops down. Then we conduct an ablation study by replacing the knowledge aided attention mechanisms with the mutual attention proposed by BIBREF3 and the self attention proposed by BIBREF4 separately, and find that the F1 score of KAR drops by INLINEFORM2 on the development set, INLINEFORM3 on AddSent, and INLINEFORM4 on AddOneSent. Finally we find that after only one epoch of training, KAR already achieves an EM of INLINEFORM5 and an F1 score of INLINEFORM6 on the development set, which is even better than the final performance of several strong baselines, such as DCN (EM / F1: INLINEFORM7 / INLINEFORM8 ) BIBREF36 and BiDAF (EM / F1: INLINEFORM9 / INLINEFORM10 ) BIBREF3 . The above empirical findings imply that general knowledge indeed plays an effective role in KAR.",
"FLOAT SELECTED: Table 2: The amount of the extraction results and the performance of KAR under each setting for χ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To verify the effectiveness of general knowledge, we first study the relationship between the amount of general knowledge and the performance of KAR. As shown in Table TABREF13 , by increasing INLINEFORM0 from 0 to 5 in the data enrichment method, the amount of general knowledge rises monotonically, but the performance of KAR first rises until INLINEFORM1 reaches 3 and then drops down.",
"FLOAT SELECTED: Table 2: The amount of the extraction results and the performance of KAR under each setting for χ."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"OF COURSE NOT. There is a huge gap between MRC models and human beings, which is mainly reflected in the hunger for data and the robustness to noise. On the one hand, developing MRC models requires a large amount of training examples (i.e. the passage-question pairs labeled with answer spans), while human beings can achieve good performance on evaluation examples (i.e. the passage-question pairs to address) without training examples. On the other hand, BIBREF6 revealed that intentionally injected noise (e.g. misleading sentences) in evaluation examples causes the performance of MRC models to drop significantly, while human beings are far less likely to suffer from this. The reason for these phenomena, we believe, is that MRC models can only utilize the knowledge contained in each given passage-question pair, but in addition to this, human beings can also utilize general knowledge. A typical category of general knowledge is inter-word semantic connections. As shown in Table TABREF1 , such general knowledge is essential to the reading comprehension ability of human beings."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"On the other hand, BIBREF6 revealed that intentionally injected noise (e.g. misleading sentences) in evaluation examples causes the performance of MRC models to drop significantly, while human beings are far less likely to suffer from this. The reason for these phenomena, we believe, is that MRC models can only utilize the knowledge contained in each given passage-question pair, but in addition to this, human beings can also utilize general knowledge."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"Do they report results only on English datasets?",
"How do the authors examine whether a model is robust to noise or not?",
"What type of model is KAR?",
"Do the authors hypothesize that humans' robustness to noise is due to their general knowledge?"
],
"question_id": [
"8bb0011ad1d63996d5650770f3be18abdd9f7fc6",
"b0dbe75047310fec4d4ce787be5c32935fc4e37b",
"d64383e39357bd4177b49c02eb48e12ba7ffd4fb",
"52f9cd05d8312ae3c7a43689804bac63f7cac34b"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Two examples about the effects of semantic level inter-word connections on human reading comprehension. In the first example, we can find the answer because we know “facilitate” and “help” are synonyms. Similarly, in the second example, we can find the answer because we know “borough” is a hypernym of “Brooklyn”, or “Brooklyn” is a hyponym of “borough”. Both of the two examples are selected from SQuAD.",
"Figure 1: Our MRC model: Knowledge Aided Reader (KAR)",
"Table 2: The amount of the extraction results and the performance of KAR under each setting for χ.",
"Table 3: The ablation analysis on the dependency embedding and the synonymy embedding.",
"Table 4: The comparison of different MRC models (published single model performance on the development set of SQuAD)."
],
"file": [
"2-Table1-1.png",
"5-Figure1-1.png",
"8-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png"
]
} | [
"How do the authors examine whether a model is robust to noise or not?"
] | [
[
"1809.03449-Model Comparison in both Performance and the Robustness to Noise-0",
"1809.03449-Experimental Settings-0"
]
] | [
"By evaluating their model on adversarial sets containing misleading sentences"
] | 152 |
1806.05513 | Humor Detection in English-Hindi Code-Mixed Social Media Content : Corpus and Baseline System | The tremendous amount of user generated data through social networking sites led to the gaining popularity of automatic text classification in the field of computational linguistics over the past decade. Within this domain, one problem that has drawn the attention of many researchers is automatic humor detection in texts. In depth semantic understanding of the text is required to detect humor which makes the problem difficult to automate. With increase in the number of social media users, many multilingual speakers often interchange between languages while posting on social media which is called code-mixing. It introduces some challenges in the field of linguistic analysis of social media content (Barman et al., 2014), like spelling variations and non-grammatical structures in a sentence. Past researches include detecting puns in texts (Kao et al., 2016) and humor in one-lines (Mihalcea et al., 2010) in a single language, but with the tremendous amount of code-mixed data available online, there is a need to develop techniques which detects humor in code-mixed tweets. In this paper, we analyze the task of humor detection in texts and describe a freely available corpus containing English-Hindi code-mixed tweets annotated with humorous(H) or non-humorous(N) tags. We also tagged the words in the tweets with Language tags (English/Hindi/Others). Moreover, we describe the experiments carried out on the corpus and provide a baseline classification system which distinguishes between humorous and non-humorous texts. | {
"paragraphs": [
[
"“Laughter is the best Medicine” is a saying which is popular with most of the people. Humor is a form of communication that bridges the gap between various languages, cultures, ages and demographics. That's why humorous content with funny and witty hashtags are so much popular on social media. It is a very powerful tool to connect with the audience. Automatic Humor Recognition is the task of determining whether a text contains some level of humorous content or not. First conference on Computational humor was organized in 1996, since then many research have been done in this field. kao2016computational does pun detection in one-liners and dehumor detects humor in Yelp reviews. Because of the complex and interesting aspects involved in detecting humor in texts, it is one of the challenging research field in Natural Language Processing BIBREF3 . Identifying humor in a sentence sometimes require a great amount of external knowledge to completely understand it. There are many types of humor, namely anecdotes, fantasy, insult, irony, jokes, quote, self deprecation etc BIBREF4 , BIBREF5 . Most of the times there are different meanings hidden inside a sentence which is grasped differently by individuals, making the task of humor identification difficult, which is why the development of a generalized algorithm to classify different type of humor is a challenging task.",
"Majority of the researches on social media texts is focused on English. A study by schroeder2010half shows that, a high percentage of these texts are in non-English languages. fischer2011language gives some interesting information about the languages used on Twitter based on the geographical locations. With a huge amount of such user generated data available on social media, there is a need to develop technologies for non-English languages. In multilingual regions like South Asia, majority of the social media users speak more than two languages. In India, Hindi is the most spoken language (spoken by 41% of the population) and English is the official language of the country. Twitter has around 23.2 million monthly active users in India. Native speakers of Hindi often put English words in the sentences and transliterate the whole sentence to Latin script while posting on social media, thereby making the task of automatic text classification a very challenging problem. Linguists came up with a term for any type of language mixing, known as `code-mixing' or `code-switching' BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 . Both the terms are used interchangeably, but there is a slight difference between the two terms. Code-mixing refers to the insertion of words, phrases, and morphemes of one language into a statement or an expression of another language, whereas transliteration of every word in a sentence to another script ( here Devanagari to Latin) is coined code-switching BIBREF10 . The first tweet in Figure 1 is an example of code-mixing and second is an example of code-switching. In this paper, we use code-mixing to denote both cases.",
"In this paper, we present a freely available corpus containing code-mixed tweets in Hindi and English language with tweets written in Latin script. Tweets are manually classified into humorous and non-humorous classes. Moreover, each token in the tweets is also given a language tag which determines the source or origin language of the token (English or Hindi). The paper is divided in sections as follows, we start by describing the corpus and the annotation scheme in Section 2. Section 3 summarizes our supervised classification system which includes pre-processing of the tweets in the dataset and the feature extraction followed by the method used to identify humor in tweets. In the next subsection, we describe the classification model and the results of the experiments conducted using character and word level features. In the last section, we conclude the paper followed by future work and references."
],
[
"In this section, we explain the techniques used in the creation and annotation of the corpus."
],
[
"Python package twitterscraper is used to scrap tweets from twitter. 10,478 tweets from the past two years from domains like `sports', `politics', `entertainment' were extracted. Among those tweets, we manually removed the tweets which were written either in English or Hindi entirely. There were 4161 tweets written in English and 2774 written in Hindi. Finally, a total of 3543 English-Hindi code-mixed tweets were collected. Table 1 describes the number of tweets and words in each category.",
"We tried to make the corpus balanced i.e. uniform distribution of tweets in each category to yield better supervised classification results as described by BIBREF11 ."
],
[
"The final code-mixed tweets were forwarded to a group of three annotators who were university students and fluent in both English and Hindi. Approximately 60 hours were spent in tagging tweets for the presence of humor. Tweets which consisted of any anecdotes, fantasy, irony, jokes, insults were annotated as humorous whereas tweets stating any facts, dialogues or speech which did not contain amusement were put in non-humorous class. Following are some examples of code-mixed tweets in the corpus:",
"“For #WontGiveItBack to work, Dhoni needs to say `Trophy toh ghar par hi bhul aaye' ”",
"(For #WontGiveItBack to work, Dhoni needs to say “We forgot the trophy at home”) .",
"Na sadak na naukri bas badh rahi gundagardi #FailedCMNitish",
"(No roads no naukri, only hooliganism increasing #FailedCMNitish )",
"Subha ka bhula agar sham ko wapas ghar aa jaye then we must thank GPS technology.",
"(If someone lost in the morning return home in the evening then we must thank GPS technology.)",
"She : Gori hai kalaiyan pehna de mujhe hari hari chudiya He *gets green bangles* She : Not this green ya, bottle green color.",
"(She : (sings a song) My wrists are so beautiful, give me green green bangles He *gets green bangles* She : Not this green ya, bottle green color. )",
"Dard dilon ke kam ho jaate... twitter par agar poetries kam ho jaate",
"(pain in the heart will reduce...if number of poetries decreases on twitter).",
"Annotators were given certain guidelines to decide whether a tweet was humorous or not. The context of the tweet could be found by searching about hashtag or keywords used in the tweet. Example (1) uses a hashtag `#WontGiveItBack' which was trending during the ICC cricket world cup 2015. Searching it on Google gave 435k results and the time of the tweet was after the final match of the tournament. So there is an observational humor in (1) as India won the world cup in 2011 and lost in 2015 , hence the tweet was classified as humorous. Any tweets stating any facts, news or reality were classified as non-humorous. There were many tweets which did not contain any hashtags, to understand the context of such tweets annotators selected some keywords from the tweet and searched them online. Example (2) contains a comment towards a political leader towards development and was categorized as non-humorous. Tweets containing normal jokes and funny quotes like in (3) and (4) were put in humorous category. There were some tweets like (5) which consists of poem or lines of a song but modified. Annotators were guided that if such tweets contains satire or any humoristic features, then it could be categorized as humorous otherwise not. There were some tweets which were typical to categorize like (5), hence it was left to the annotators to the best of their understanding. Based on the above guidelines annotators categorized the tweets. To measure inter annotator agreement we opted for Fleiss' Kappa BIBREF12 obtaining an agreement of 0.821 . Both humorous and non-humorous tweets in nearly balanced amount were selected to prepare the corpus. If we had included humorous tweets from one domain like sports and non humorous tweets from another domain like news then, it would have given high performance of classification BIBREF13 . To classify based on the semantics and not on the domain differences, we included both types of tweets from different domains. Many tweets contains a picture along with a caption. Sometimes a caption may not contain humor but combined with the picture, it can provide some degree of humor. Such tweets were removed from the corpus to make the corpus unimodal. In Figure 1, the first tweet, “Anurag Kashyap can never join AAP because ministers took oath `main kisi Anurag aur dwesh ke bina kaam karunga' ” (Anurag Kashyap can never join AAP because ministers took oath `I will work without any affection (Anurag in Hindi) and without hesitation (dwesh in Hindi)'), was classified as humorous. The second tweet, “#SakshiMalik take a bow! #proudIndian #Rio #Olympics #BronzeMedal #girlpower Hamaara khaata khul Gaya!” (#SakshiMalik take a bow! #proudIndian #Rio #Olympics #BronzeMedal #girlpower Our account opened!) was classified as non-humorous as it contains a pride statement."
],
[
"Code-mixing provides some challenges for language identification like spelling variations and non-grammatical structures in a sentence BIBREF0 . Therefore, we annotated the tweets with the language at the word level. Native speakers of Hindi and proficient in English, labelled the language of the tokens in the tweets. Three types of tags were assigned to the tokens , En is assigned to the tokens present in English vocabulary like “family”, “Children” etc. Similarly, Hi is assigned to the tokens present in Hindi vocabulary but transliterated to Latin script like “samay” (time), “aamaadmi” (common man). Rest of the tokens consists of proper nouns, numbers, dates, urls, hashtags, mentions, emojis and punctuations which are labelled as Ot(others). Major concern in language annotation was to annotate the words present in both languages (ambiguous words). For example, `to' (`but' in Hindi) and `is' (`this' in Hindi) , for this scenario annotators understood the context of the tweet and based on that the words were being annotated. For example, consider two sentences `This place is 500 years old' and `Is bar Modi Sarkar'(This time Modi Government). So `is' in first sentence should be tagged as English because it is used as a verb. It is transliterated to English in latter sentence and used as determiner (`this') so it was tagged as Hindi. So these kind of words were tagged based on their meanings and usage in both languages. Figure 1 describes the language annotation of tweets in the corpus."
],
[
"Two example annotations are illustrated in Figure 1. First line in every annotation consists of tweet id. Each tweet is enclosed within <tweet></tweet> tags and each word is enclosed within <word lang=“ ”></word> containing the language annotation for each word. Last line determines the category in which the tweet belongs i.e. humorous or non-humorous, enclosed by <class></class> tags. The annotated dataset with the classification system is made available online ."
],
[
"Social media users often make spelling mistakes or use multiple variations of a word while posting a text. We replaced all such errors with their correct version. For words in Hindi that were transliterated to Latin script, we adopted a common spelling for all those words across the corpus. For example, `dis' is often used as a short form for `this', so we replaced every occurrence of `dis' to `this' in the corpus. Some examples of spelling variations are, `pese' for `paisa' (unit of curreny), `h' for `hai' (is), `ad' for `add' (addition)."
],
[
"In this section, we describe our machine learning model which is trained and tested on the corpus described in the previous section."
],
[
"Preprocessing starts with tokenization, which involves separation of words using space as the delimiter and then converting the words to lower cases which is followed by the removal of punctuation marks from the tokenized tweet. All the hashtags, mentions and urls, are stored and converted to `hashtag', `mention' and `url' respectively. Sometimes hashtags provide some degree of humor in tweets, hence we segregated hashtags on the basis of camel cases and included the tokens in the tokenized tweets (hashtag decomposition) BIBREF14 , BIBREF15 . for example, #AadabArzHai can be decomposed into 3 words, `Aadab', `Arz' and `Hai'. Finally the tokenized tweets are stored along with the presence of humor as the target class."
],
[
"The features used to build attribute vectors for training our classification model are described below. We use character level and word level features for the classification BIBREF15 . For all the features, we separated the words in the tweets based on the language annotation (Section 2.3) and prepared the feature vector for each tweet by combining the vectors for both the languages .",
"Previous researches shows that letter n-grams are very efficient for classifying text. They are language independent and does not require expensive text pre-processing techniques like tokenization, stemming and stop words removal, hence in the case of code-mix texts, this could yield good results BIBREF16 , BIBREF17 . Since the number of n-grams can be very large we took trigrams which occur more than ten times in the corpus.",
"For classifying humor in texts, it is important to understand the semantics of the sentence. Thus, we took a three word window as a feature to train our classification models to incorporate the contextual information.",
"Many jokes and idioms sometimes have common words. We identified those words and took them as as a feature for classification. In the preprocessing step, we decomposed hashtags using camel cases and added them along with the words. Hence, common words in the hashtags were also included in the feature vector."
],
[
"We experimented with four different classifiers, namely, support vector machine BIBREF18 , random forest, extra tree and naive bayes classifier BIBREF19 . Chi square feature selection algorithm is applied to reduces the size of our feature vector. For training our system classifier, we used Scikit-learn BIBREF19 .",
"10-fold cross validation on 3543 code-mixed tweets was carried out by dividing the corpus into 10 equal parts with nine parts as training corpus and rest one for testing. Mean accuracy is calculated by taking the average of the accuracy obtained in each iteration of the testing process. Table 2 shows the accuracy for each feature when trained using mentioned classifiers along with the accuracy when all the features are used along with the overall accuracy. Support vector machine with radial basis function kernel and extra tree classifier performs better than other classifiers and yields 69.3% and 67.8% accuracy respectively. The reason kernel SVM yields the best result is because the number of observations is greator than the number of features BIBREF20 . N-grams proved to be the most efficient in all classification models followed by common words and hastags. Bag-of-words feature performed the worst in SVM, random forest and extra tree classifier but yielded better result in naive bayes classifiers. Accuracies mentioned in table 2 were calculated using fine tuning of model parameters using grid search."
],
[
"In this paper, we describe a freely available corpus of 3453 English-Hindi code-mixed tweets. The tweets are annotated with humorous(H) and non-humorous(N) tags along with the language tags at the word level. The task of humor identification in social media texts is analyzed as a classification problem and several machine learning classification models are used. The features used in our classification system are n-grams, bag-of-words, common words and hashtags. N-grams when trained with support vector machines with radial basis function kernel performed better than other features and yielded an accuracy of 68.5%. The best accuracy (69.3%) was given by support vector machines with radial basis function kernel.",
"This paper describes the initial efforts in automatic humor detection in code-mixed social media texts. Corpus can be annotated with part-of-speech tags at the word level which may yield better results in language detection. Moreover, the dataset can be further extended to include tweets from other domains. Code-mixing is very common phenomenon on social media and it is prevalent mostly in multilingual regions. It would be interesting to experiment with code-mixed texts consisting of more than two languages in which the issue of transliteration exists like Arabic, Greek and South Asian languages. Comparing training with code-mixed tweets with training with a merged corpus of monolingual tweets in English and Hindi could be an interesting future work."
]
],
"section_name": [
"Introduction",
"Corpus Creation and Annotation",
"Data Collection",
"Humor Annotation",
"Language Annotation",
"Annotation Scheme",
"Error Analysis",
"System Architecture",
"Pre-processing of Corpus",
"Classification Features",
"Classification Approach and Results",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"13e54e0ce618cb9eb50cb4a43e2762dc9037063e",
"ec79426745ae285a757c2869d07c08123d7cd1b9"
],
"answer": [
{
"evidence": [
"We experimented with four different classifiers, namely, support vector machine BIBREF18 , random forest, extra tree and naive bayes classifier BIBREF19 . Chi square feature selection algorithm is applied to reduces the size of our feature vector. For training our system classifier, we used Scikit-learn BIBREF19 ."
],
"extractive_spans": [
"support vector machine BIBREF18 , random forest, extra tree and naive bayes classifier BIBREF19"
],
"free_form_answer": "",
"highlighted_evidence": [
"We experimented with four different classifiers, namely, support vector machine BIBREF18 , random forest, extra tree and naive bayes classifier BIBREF19 . Chi square feature selection algorithm is applied to reduces the size of our feature vector. For training our system classifier, we used Scikit-learn BIBREF19 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We experimented with four different classifiers, namely, support vector machine BIBREF18 , random forest, extra tree and naive bayes classifier BIBREF19 . Chi square feature selection algorithm is applied to reduces the size of our feature vector. For training our system classifier, we used Scikit-learn BIBREF19 .",
"In this paper, we describe a freely available corpus of 3453 English-Hindi code-mixed tweets. The tweets are annotated with humorous(H) and non-humorous(N) tags along with the language tags at the word level. The task of humor identification in social media texts is analyzed as a classification problem and several machine learning classification models are used. The features used in our classification system are n-grams, bag-of-words, common words and hashtags. N-grams when trained with support vector machines with radial basis function kernel performed better than other features and yielded an accuracy of 68.5%. The best accuracy (69.3%) was given by support vector machines with radial basis function kernel."
],
"extractive_spans": [],
"free_form_answer": "Classification system use n-grams, bag-of-words, common words and hashtags as features and SVM, random forest, extra tree and NB classifiers.",
"highlighted_evidence": [
"We experimented with four different classifiers, namely, support vector machine BIBREF18 , random forest, extra tree and naive bayes classifier BIBREF19 . ",
"The features used in our classification system are n-grams, bag-of-words, common words and hashtags"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"ddd51ad4ea34a5f9f437c8d93336939ff6b31095"
],
"answer": [
{
"evidence": [
"In this paper, we describe a freely available corpus of 3453 English-Hindi code-mixed tweets. The tweets are annotated with humorous(H) and non-humorous(N) tags along with the language tags at the word level. The task of humor identification in social media texts is analyzed as a classification problem and several machine learning classification models are used. The features used in our classification system are n-grams, bag-of-words, common words and hashtags. N-grams when trained with support vector machines with radial basis function kernel performed better than other features and yielded an accuracy of 68.5%. The best accuracy (69.3%) was given by support vector machines with radial basis function kernel."
],
"extractive_spans": [
"task of humor identification in social media texts is analyzed as a classification problem"
],
"free_form_answer": "",
"highlighted_evidence": [
"The task of humor identification in social media texts is analyzed as a classification problem and several machine learning classification models are used."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6fb50c41707d444f119b16f699f64f1b572b556c",
"d27ef36f45c0d0c02d3e93cbf921e71f46153c8b"
],
"answer": [
{
"evidence": [
"The final code-mixed tweets were forwarded to a group of three annotators who were university students and fluent in both English and Hindi. Approximately 60 hours were spent in tagging tweets for the presence of humor. Tweets which consisted of any anecdotes, fantasy, irony, jokes, insults were annotated as humorous whereas tweets stating any facts, dialogues or speech which did not contain amusement were put in non-humorous class. Following are some examples of code-mixed tweets in the corpus:"
],
"extractive_spans": [
"three "
],
"free_form_answer": "",
"highlighted_evidence": [
"The final code-mixed tweets were forwarded to a group of three annotators who were university students and fluent in both English and Hindi."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The final code-mixed tweets were forwarded to a group of three annotators who were university students and fluent in both English and Hindi. Approximately 60 hours were spent in tagging tweets for the presence of humor. Tweets which consisted of any anecdotes, fantasy, irony, jokes, insults were annotated as humorous whereas tweets stating any facts, dialogues or speech which did not contain amusement were put in non-humorous class. Following are some examples of code-mixed tweets in the corpus:",
"Annotators were given certain guidelines to decide whether a tweet was humorous or not. The context of the tweet could be found by searching about hashtag or keywords used in the tweet. Example (1) uses a hashtag `#WontGiveItBack' which was trending during the ICC cricket world cup 2015. Searching it on Google gave 435k results and the time of the tweet was after the final match of the tournament. So there is an observational humor in (1) as India won the world cup in 2011 and lost in 2015 , hence the tweet was classified as humorous. Any tweets stating any facts, news or reality were classified as non-humorous. There were many tweets which did not contain any hashtags, to understand the context of such tweets annotators selected some keywords from the tweet and searched them online. Example (2) contains a comment towards a political leader towards development and was categorized as non-humorous. Tweets containing normal jokes and funny quotes like in (3) and (4) were put in humorous category. There were some tweets like (5) which consists of poem or lines of a song but modified. Annotators were guided that if such tweets contains satire or any humoristic features, then it could be categorized as humorous otherwise not. There were some tweets which were typical to categorize like (5), hence it was left to the annotators to the best of their understanding. Based on the above guidelines annotators categorized the tweets. To measure inter annotator agreement we opted for Fleiss' Kappa BIBREF12 obtaining an agreement of 0.821 . Both humorous and non-humorous tweets in nearly balanced amount were selected to prepare the corpus. If we had included humorous tweets from one domain like sports and non humorous tweets from another domain like news then, it would have given high performance of classification BIBREF13 . To classify based on the semantics and not on the domain differences, we included both types of tweets from different domains. Many tweets contains a picture along with a caption. Sometimes a caption may not contain humor but combined with the picture, it can provide some degree of humor. Such tweets were removed from the corpus to make the corpus unimodal. In Figure 1, the first tweet, “Anurag Kashyap can never join AAP because ministers took oath `main kisi Anurag aur dwesh ke bina kaam karunga' ” (Anurag Kashyap can never join AAP because ministers took oath `I will work without any affection (Anurag in Hindi) and without hesitation (dwesh in Hindi)'), was classified as humorous. The second tweet, “#SakshiMalik take a bow! #proudIndian #Rio #Olympics #BronzeMedal #girlpower Hamaara khaata khul Gaya!” (#SakshiMalik take a bow! #proudIndian #Rio #Olympics #BronzeMedal #girlpower Our account opened!) was classified as non-humorous as it contains a pride statement."
],
"extractive_spans": [
"three annotators"
],
"free_form_answer": "",
"highlighted_evidence": [
"The final code-mixed tweets were forwarded to a group of three annotators who were university students and fluent in both English and Hindi.",
"To measure inter annotator agreement we opted for Fleiss' Kappa BIBREF12 obtaining an agreement of 0.821 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b5deabd8b8f493a0e9e520bf6ed49cc2c65d67ee",
"ba5acf86ca3c502ccab9f24ab4cbade8fadcf10b"
],
"answer": [
{
"evidence": [
"Python package twitterscraper is used to scrap tweets from twitter. 10,478 tweets from the past two years from domains like `sports', `politics', `entertainment' were extracted. Among those tweets, we manually removed the tweets which were written either in English or Hindi entirely. There were 4161 tweets written in English and 2774 written in Hindi. Finally, a total of 3543 English-Hindi code-mixed tweets were collected. Table 1 describes the number of tweets and words in each category."
],
"extractive_spans": [
"tweets from the past two years from domains like `sports', `politics', `entertainment'"
],
"free_form_answer": "",
"highlighted_evidence": [
"Python package twitterscraper is used to scrap tweets from twitter. 10,478 tweets from the past two years from domains like `sports', `politics', `entertainment' were extracted."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Python package twitterscraper is used to scrap tweets from twitter. 10,478 tweets from the past two years from domains like `sports', `politics', `entertainment' were extracted. Among those tweets, we manually removed the tweets which were written either in English or Hindi entirely. There were 4161 tweets written in English and 2774 written in Hindi. Finally, a total of 3543 English-Hindi code-mixed tweets were collected. Table 1 describes the number of tweets and words in each category."
],
"extractive_spans": [
"twitter"
],
"free_form_answer": "",
"highlighted_evidence": [
"Python package twitterscraper is used to scrap tweets from twitter. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What type of system does the baseline classification use?",
"What experiments were carried out on the corpus?",
"How many annotators tagged each text?",
"Where did the texts in the corpus come from?"
],
"question_id": [
"dea9e7fe8e47da5e7f31d9b1a46ebe34e731a596",
"955cbea7e5ead36fb89cd6229a97ccb3febcf8bc",
"04d1b3b41fb62a7b896afe55e0e8bc5ffb8c6e39",
"15cdd9ea4bae8891c1652da2ed34c87bbbd0edb8"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"humor",
"humor",
"humor",
"humor"
],
"topic_background": [
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"Table 1: Description of the corpus.",
"Figure 1: Example Annotations.",
"Table 2: Accuracy of each feature using different classifiers"
],
"file": [
"2-Table1-1.png",
"2-Figure1-1.png",
"4-Table2-1.png"
]
} | [
"What type of system does the baseline classification use?"
] | [
[
"1806.05513-Conclusion and Future Work-0",
"1806.05513-Classification Approach and Results-0"
]
] | [
"Classification system use n-grams, bag-of-words, common words and hashtags as features and SVM, random forest, extra tree and NB classifiers."
] | 153 |
1806.11432 | Using General Adversarial Networks for Marketing: A Case Study of Airbnb | In this paper, we examine the use case of general adversarial networks (GANs) in the field of marketing. In particular, we analyze how GAN models can replicate text patterns from successful product listings on Airbnb, a peer-to-peer online market for short-term apartment rentals. To do so, we define the Diehl-Martinez-Kamalu (DMK) loss function as a new class of functions that forces the model's generated output to include a set of user-defined keywords. This allows the general adversarial network to recommend a way of rewording the phrasing of a listing description to increase the likelihood that it is booked. Although we tailor our analysis to Airbnb data, we believe this framework establishes a more general model for how generative algorithms can be used to produce text samples for the purposes of marketing. | {
"paragraphs": [
[
"The development of online peer-to-peer markets in the 1990s, galvanized by the launch of sites like eBay, fundamentally shifted the way buyers and sellers could connect [4]. These new markets not only leveraged technology to allow for faster transaction speeds, but in the process also exposed a variety of unprecedented market-designs [4].",
"Today, many of the most well-known peer-to-peer markets like Uber and Instacart use a centralized system that matches workers with assigned tasks via a series of complex algorithms [4]. Still, a number of other websites like Airbnb and eBay rely on sellers and buyers to organically find one-another in a decentralized fashion. In the case of these decentralized systems, sellers are asked to price and market their products in order to attract potential buyers. Without a large marketing team at their disposal, however, sellers most often rely on their intuitions for how to present their articles or listings in the most appealing manner. Naturally, this leads to market inefficiencies, where willing sellers and buyers often fail to connect due to an inadequate presentation of the product or service offered."
],
[
"Fortunately, we believe that the introduction of unsupervised generative language models presents a way in which to tackle this particular shortcoming of peer-to-peer markets. In 2014, Ian Goodfellow et. al proposed the general adversarial network (GAN) [5]. The group showcased how this generative model could learn to artificially replicate data patterns to an unprecedented realistic degree [5]. Since then, these models have shown tremendous potential in their ability to generate photo-realistic images and coherent text samples [5].",
"The framework that GANs use for generating new data points employs an end-to-end neural network comprised of two models: a generator and a discriminator [5]. The generator is tasked with replicating the data that is fed into the model, without ever being directly exposed to the real samples. Instead, this model learns to reproduce the general patterns of the input via its interaction with the discriminator.",
"The role of the discriminator, in turn, is to tell apart which data points are ‘real’ and which have been created by the generator. On each run through the model, the generator then adapts its constructed output so as to more effectively ‘trick’ the discriminator into not being able to distinguish the real from the generated data. The end-to-end nature of the model then forces both the generator and discriminator to learn in parallel [7]. While GAN models have shown great potential in their ability to generate realistic data samples, they are notoriously difficult to train. This difficulty arises from two-parts: 1) First, it is difficult to tune the hyper-parameters correctly for the adversarial model to continue learning throughout all of the training epochs [5]. Since both the discriminator and generator are updated via the same gradient, it is very common for the model to fall into a local minima before completing all of the defined training cycles. 2) GANs are computationally expensive to train, given that both models are updated on each cycle in parallel [5]. This compounds the difficulty of tuning the model’s parameters.",
"Nonetheless, GANs have continued to show their value particularly in the domain of text-generation. Of particular interest for our purposes, Radford et al. propose synthesizing images from text descriptions [3]. The group demonstrates how GANs can produce images that correspond to a user-defined text description. It thus seems feasible that by using a similar model, we can produce text samples that are conditioned upon a set of user-specified keywords.",
"We were similarly influenced by the work of Radford et. al, who argue for the importance of layer normalization and data-specific trained word embeddings for text generation [9] and sentiment analysis categorization. These findings lead us to question whether it is possible to employ recurrent neural networks with long short-term memory gates, as defined by Mikolov et al., to categorize product descriptions into categories based on the product's popularity [6]."
],
[
"The data for the project was acquired from Airdna, a data processing service that collaborates with Airbnb to produce high-accuracy data summaries for listings in geographic regions of the United States. For the sake of simplicity, we focus our analysis on Airbnb listings from Manhattan, NY, during the time period of January 1, 2016, to January 1, 2017. The data provided to us contained information for roughly 40,000 Manhattan listings that were posted on Airbnb during this defined time period. For each listing, we were given information of the amenities of the listing (number of bathrooms, number of bedrooms …), the listing’s zip code, the host’s description of the listing, the price of the listing, and the occupancy rate of the listing. Airbnb defines a home's occupancy rate, as the percentage of time that a listing is occupied over the time period that the listing is available. This gives us a reasonable metric for defining popular versus less popular listings."
],
[
"Prior to building our generative model, we sought to gain a better understanding of how less and more popular listing descriptions differed in their writing style. We defined a home’s popularity via its occupancy rate metric, which we describe in the Data section. Using this popularity heuristic, we first stratified our dataset into groupings of listings at similar price points (i.e. $0-$30, $30-$60, ...). Importantly, rather than using the home’s quoted price, we relied on the price per bedroom as a better metric for the cost of the listing. Having clustered our listings into these groupings, we then selected the top third of listings by occupancy rate, as part of the ‘high popularity’ group. Listings in the middle and lowest thirds by occupancy rate were labeled ‘medium popularity’ and ‘low popularity’ respectively. We then combined all of the listings with high/medium/low popularity together for our final data set."
],
[
"Using our cleaned data set, we now built a recurrent neural network (RNN) with long short-term memory gates (LSTM). Our RNN/LSTM is trained to predict, given a description, whether a home corresponds to a high/medium/low popularity listing. The architecture of the RNN/LSTM employs Tensorflow’s Dynamic RNN package. Each sentence input is first fed into an embedding layer, where the input’s text is converted to a GloVe vector. These GloVe vectors are learned via a global word-word co-occurrence matrix using our corpus of Airbnb listing descriptions [8]. At each time step, the GloVe vectors are then fed into an LSTM layer. For each layer, the model forward propagates the output of the LSTM layer to the next time-step’s LSTM layer via a rectified linear unit (RLU) activation function. Each layer also pipes the output of the LSTM through a cross-entropy operation, to predict, for each time-step, the category of the input sequence. We finally ensemble these predictions, to create the model’s complete output prediction."
],
[
"Having observed an unideal performance on this task (see Experiments below), we turned our attention to building a model that can replicate the writing style of high popularity listing descriptions. To solve this task, we designed a framework for a general adversarial network. This model employs the standard set up of a generator and a discriminator, but extends the framework with the adoption of the Diehl-Martinez-Kamalu loss.",
"The generator is designed as a feed-forward neural network with three layers of depth. The input to the generator is simply a vector of random noise. This input is then fed directly to the first hidden layer via a linear transformation. Between the first and second layer we apply an exponential linear unit (ELU) as a non-linear activation function. Our reasoning for doing so is based on findings by Dash et al. that the experimental accuracy of ELUs over rectified linear units (RLU) tends to be somewhat higher for generative tasks [3]. Then, to scale the generator’s output to be in the range 0-1, we apply a sigmoid non-linearity between the second and the third layer of the model.",
"The discriminator similarly used a feed-forward structure with three layers of depth. The input to the discriminator comes from two sources: real data fed directly into the discriminator and the data generated by the generator. This input is then piped into the first hidden layer. As before, an ELU transformation is then applied between the first and second layer, as well as between the second and third hidden layers. Finally, a sigmoid activation is used on the output of the last hidden layer. This sigmoid activation is important since the output of our discriminator is a binary boolean that indicates whether the discriminator believes the input to have been real data or data produced by the generator. This discriminator is thus trained to minimize the binary cross-entropy loss of its prediction (whether the data was real or fake) and the real ground-truth of each data point.",
"The general framework defined above was inspired largely by the open-source code of Nag Dev, and was built using Pytorch [7]. One key extension to the basic GAN model, however, is the loss function that we apply to the generator, namely the Diehl-Martinez-Kamalu (DMK) Loss which we define below.",
"The Diehl-Martinez-Kamalu Loss is a weighted combination of a binary cross entropy loss with a dot-product attention metric of each user-defined keyword with the model’s generated output. Formally, the binary cross entropy (BCE) loss for one example is defined as: $ BCE(x,y) = y \\cdot logx + (1-y) \\cdot log(1-x), $ ",
"where x is defined as the predicted label for each sample and y is the true label (i.e. real versus fake data). The DMK loss then calculates an additional term, which corresponds to the dot-product attention of each word in the generated output with each keyword specified by the user. To illustrate by example, say a user desires the generated output to contain the keywords, $\\lbrace subway, manhattan\\rbrace $ . The model then converts each of these keywords to their corresponding glove vectors. Let us define the following notation $e(‘apple’)$ is the GloVe representation of the word apple, and let us suppose that $g$ is the vector of word embeddings generated by the generator. That is, $g_1$ is the first word embedding of the generator’s output. Let us also suppose $k$ is a vector of the keywords specified by the user. In our examples, $k$ is always in $R^{1}$ with $k_1$ one equaling of $‘subway’$ or $‘parking’$ . The dot-product term of the DMK loss then calculates $e(‘apple’)$0 . Weighing this term by some hyper-parameter, $e(‘apple’)$1 , then gives us the entire definition of the DMK loss: $e(‘apple’)$2 $e(‘apple’)$3 "
],
[
"In seeking to answer the question of whether the occupancy rate of a listing could be extracted from the listing’s summary, we ran a number of experiments on our first model. Two parameterizations which we present here are (1) whether the word vectors used in the embedding layer are trained on our corpus or come pretrained from Wikipedia and Gigaword and (2) whether ensembling or the final hidden state in isolation are used to make a prediction for the sequence. Common to all experiments was our decision to use an Adam optimizer, 16 LSTM units, 50-dimensional GloVe vectors, and a 70-30 split in train and test data.",
"Over ten epochs, the model parameterization which performs the best uses GloVe vectors trained on a corpus consisting of all listing descriptions and ensembling to make its class prediction. As a result, our findings are well in-line with those presented by Radford et. al who underscore the importance of training word embeddings on a data-specific corpus for best results on generative tasks [9].",
"That said, these results, though they do show a marginal increase in dev accuracy and a decrease in CE loss, suggest that perhaps listing description is not too predictive of occupancy rate given our parameterizations. While the listing description is surely an influential metric in determining the quality of a listing, other factors such as location, amenities, and home type might play a larger role in the consumer's decision. We were hopeful that these factors would be represented in the price per bedroom of the listing – our control variable – but the relationship may not have been strong enough.",
"However, should a strong relationship actually exist and there be instead a problem with our method, there are a few possibilities of what went wrong. We assumed that listings with similar occupancy rates would have similar listing descriptions regardless of price, which is not necessarily a strong assumption. This is coupled with an unexpected sparseness of clean data. With over 40,000 listings, we did not expect to see such poor attention to orthography in what are essentially public advertisements of the properties. In this way, our decision to use a window size of 5, a minimum occurrence count of 2, and a dimensionality of 50 when training our GloVe vectors was ad hoc.",
"Seeking to create a model which could generate and discriminate a “high-occupancy listing description”, we wanted to evaluate the capabilities of a generative adversarial network trained on either the standard binary cross-entropy loss or the DMK loss proposed above. Common to both models was the decision to alternate between training the generator for 50 steps and the discriminator for 2000 steps. We leave further tuning of the models to future research as each occasionally falls into unideal local optima within 20 iterations. One potential culprit is the step imbalance between generator and discriminator – should either learn at a much faster rate than the other, one component is liable to be “defeated\" and cease to learn the training data.",
"Qualitatively the network trained on the DMK loss shows great promise. With respect to the two experiments presented here, we have shown that it is possible to introduce a measure of suggestion in the text produced by the generator. While this model is also subject to a rapid deadlock between generator and discriminator, it is interesting to see how the introduction of keywords is gradual and affects the proximal tokens included in the output. This behavior was made possible by paying close attention to the hyperparameter $\\gamma $ , the weight given to the dot product attention term of the DMK loss. After manual tuning, we settle on $\\gamma =0.00045$ for this weight. Below, we illustrate model outputs using different values of Gamma. As is apparent, for a hyper-parameter value less than roughly $\\gamma = 0.0004$ , the model tends to ignore the importance of the keyword weights. Conversely, with a $\\gamma $ value higher than $0.0005$ , the model tends towards overweighting the representation of the keywords in the model output."
],
[
"We hope that this research paper establishes a first attempt at using generative machine learning models for the purposes of marketing on peer-to-peer platforms. As the use of these markets becomes more widespread, the use of self-branding and marketing tools will grow in importance. The development of tools like the DMK loss in combination with GANs demonstrates the enormous potential these frameworks can have in solving problems that inevitably arise on peer-to-peer platforms. Certainly, however, more works remains to be done in this field, and recent developments in unsupervised generative models already promise possible extensions to our analysis.",
"For example, since the introduction of GANs, research groups have studied how variational autoencoders can be incorporated into these models, to increase the clarity and accuracy of the models’ outputs. Wang et al. in particular demonstrate how using a variational autoencoder in the generator model can increase the accuracy of generated text samples [11]. Most promisingly, the data used by the group was a large data set of Amazon customer reviews, which in many ways parallels the tone and semantic structure of Airbnb listing descriptions. More generally, Bowman et al. demonstrate how the use of variational autoencoders presents a more accurate model for text generation, as compared to standard recurrent neural network language models [1].",
"For future work, we may also consider experimenting with different forms of word embeddings, such as improved word vectors (IWV) which were suggested by Rezaeinia et al [10]. These IMV word embeddings are trained in a similar fashion to GloVe vectors, but also encode additional information about each word, like the word’s part of speech. For our model, we may consider encoding similar information in order for the generator to more easily learn common semantic patterns inherent to the marketing data being analyzed."
]
],
"section_name": [
"Introduction",
"Background",
"Data",
"Approach",
"Recurrent Neural Network with Long Short-Term Memory Gates",
"Generative Adversarial Network",
"Experiments",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"4378a1930ff3082042e4ffe7da3c6abb695a4efa",
"deedc0527be9f191c908b0176078ecf8b8a76aec"
],
"answer": [
{
"evidence": [
"Nonetheless, GANs have continued to show their value particularly in the domain of text-generation. Of particular interest for our purposes, Radford et al. propose synthesizing images from text descriptions [3]. The group demonstrates how GANs can produce images that correspond to a user-defined text description. It thus seems feasible that by using a similar model, we can produce text samples that are conditioned upon a set of user-specified keywords.",
"where x is defined as the predicted label for each sample and y is the true label (i.e. real versus fake data). The DMK loss then calculates an additional term, which corresponds to the dot-product attention of each word in the generated output with each keyword specified by the user. To illustrate by example, say a user desires the generated output to contain the keywords, $\\lbrace subway, manhattan\\rbrace $ . The model then converts each of these keywords to their corresponding glove vectors. Let us define the following notation $e(‘apple’)$ is the GloVe representation of the word apple, and let us suppose that $g$ is the vector of word embeddings generated by the generator. That is, $g_1$ is the first word embedding of the generator’s output. Let us also suppose $k$ is a vector of the keywords specified by the user. In our examples, $k$ is always in $R^{1}$ with $k_1$ one equaling of $‘subway’$ or $‘parking’$ . The dot-product term of the DMK loss then calculates $e(‘apple’)$0 . Weighing this term by some hyper-parameter, $e(‘apple’)$1 , then gives us the entire definition of the DMK loss: $e(‘apple’)$2 $e(‘apple’)$3"
],
"extractive_spans": [],
"free_form_answer": "Words that a user wants them to appear in the generated output.",
"highlighted_evidence": [
"It thus seems feasible that by using a similar model, we can produce text samples that are conditioned upon a set of user-specified keywords.",
"To illustrate by example, say a user desires the generated output to contain the keywords, $\\lbrace subway, manhattan\\rbrace $ . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The data for the project was acquired from Airdna, a data processing service that collaborates with Airbnb to produce high-accuracy data summaries for listings in geographic regions of the United States. For the sake of simplicity, we focus our analysis on Airbnb listings from Manhattan, NY, during the time period of January 1, 2016, to January 1, 2017. The data provided to us contained information for roughly 40,000 Manhattan listings that were posted on Airbnb during this defined time period. For each listing, we were given information of the amenities of the listing (number of bathrooms, number of bedrooms …), the listing’s zip code, the host’s description of the listing, the price of the listing, and the occupancy rate of the listing. Airbnb defines a home's occupancy rate, as the percentage of time that a listing is occupied over the time period that the listing is available. This gives us a reasonable metric for defining popular versus less popular listings.",
"where x is defined as the predicted label for each sample and y is the true label (i.e. real versus fake data). The DMK loss then calculates an additional term, which corresponds to the dot-product attention of each word in the generated output with each keyword specified by the user. To illustrate by example, say a user desires the generated output to contain the keywords, $\\lbrace subway, manhattan\\rbrace $ . The model then converts each of these keywords to their corresponding glove vectors. Let us define the following notation $e(‘apple’)$ is the GloVe representation of the word apple, and let us suppose that $g$ is the vector of word embeddings generated by the generator. That is, $g_1$ is the first word embedding of the generator’s output. Let us also suppose $k$ is a vector of the keywords specified by the user. In our examples, $k$ is always in $R^{1}$ with $k_1$ one equaling of $‘subway’$ or $‘parking’$ . The dot-product term of the DMK loss then calculates $e(‘apple’)$0 . Weighing this term by some hyper-parameter, $e(‘apple’)$1 , then gives us the entire definition of the DMK loss: $e(‘apple’)$2 $e(‘apple’)$3"
],
"extractive_spans": [],
"free_form_answer": "terms common to hosts' descriptions of popular Airbnb properties, like 'subway', 'manhattan', or 'parking'",
"highlighted_evidence": [
"For each listing, we were given information of the amenities of the listing (number of bathrooms, number of bedrooms …), the listing’s zip code, the host’s description of the listing, the price of the listing, and the occupancy rate of the listing.",
"To illustrate by example, say a user desires the generated output to contain the keywords, $\\lbrace subway, manhattan\\rbrace $ .",
"Let us also suppose $k$ is a vector of the keywords specified by the user. In our examples, $k$ is always in $R^{1}$ with $k_1$ one equaling of $‘subway’$ or $‘parking’$ . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
},
{
"annotation_id": [
"1fbdb7997553865a594b9366b87aaf3b74c85bee",
"a814dd6d1fc6669f35b1701e27847e2868c53c33"
],
"answer": [
{
"evidence": [
"That said, these results, though they do show a marginal increase in dev accuracy and a decrease in CE loss, suggest that perhaps listing description is not too predictive of occupancy rate given our parameterizations. While the listing description is surely an influential metric in determining the quality of a listing, other factors such as location, amenities, and home type might play a larger role in the consumer's decision. We were hopeful that these factors would be represented in the price per bedroom of the listing – our control variable – but the relationship may not have been strong enough.",
"However, should a strong relationship actually exist and there be instead a problem with our method, there are a few possibilities of what went wrong. We assumed that listings with similar occupancy rates would have similar listing descriptions regardless of price, which is not necessarily a strong assumption. This is coupled with an unexpected sparseness of clean data. With over 40,000 listings, we did not expect to see such poor attention to orthography in what are essentially public advertisements of the properties. In this way, our decision to use a window size of 5, a minimum occurrence count of 2, and a dimensionality of 50 when training our GloVe vectors was ad hoc.",
"FLOAT SELECTED: Table 2: GAN Model, Keywords = [parking], Varying Gamma Parameter"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"That said, these results, though they do show a marginal increase in dev accuracy and a decrease in CE loss, suggest that perhaps listing description is not too predictive of occupancy rate given our parameterizations. ",
"However, should a strong relationship actually exist and there be instead a problem with our method, there are a few possibilities of what went wrong.",
"FLOAT SELECTED: Table 2: GAN Model, Keywords = [parking], Varying Gamma Parameter"
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"5c4ed7751ffe32c903ca9e8c7eee49ba66e4336b"
]
},
{
"annotation_id": [
"2ef2e1e2a5e321b48c2b7bcda6fc3d06e3af65de",
"c887f03756d92922edcfa96ee011fcbc6765e27b"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Results of RNN/LSTM"
],
"extractive_spans": [],
"free_form_answer": "GloVe vectors trained on Wikipedia Corpus with ensembling, and GloVe vectors trained on Airbnb Data without ensembling",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Results of RNN/LSTM"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"5c4ed7751ffe32c903ca9e8c7eee49ba66e4336b"
]
},
{
"annotation_id": [
"467d6904cf8ee5fd1c20243e1299833165dcac42"
],
"answer": [
{
"evidence": [
"The data for the project was acquired from Airdna, a data processing service that collaborates with Airbnb to produce high-accuracy data summaries for listings in geographic regions of the United States. For the sake of simplicity, we focus our analysis on Airbnb listings from Manhattan, NY, during the time period of January 1, 2016, to January 1, 2017. The data provided to us contained information for roughly 40,000 Manhattan listings that were posted on Airbnb during this defined time period. For each listing, we were given information of the amenities of the listing (number of bathrooms, number of bedrooms …), the listing’s zip code, the host’s description of the listing, the price of the listing, and the occupancy rate of the listing. Airbnb defines a home's occupancy rate, as the percentage of time that a listing is occupied over the time period that the listing is available. This gives us a reasonable metric for defining popular versus less popular listings."
],
"extractive_spans": [
"roughly 40,000 Manhattan listings"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the sake of simplicity, we focus our analysis on Airbnb listings from Manhattan, NY, during the time period of January 1, 2016, to January 1, 2017. The data provided to us contained information for roughly 40,000 Manhattan listings that were posted on Airbnb during this defined time period. For each listing, we were given information of the amenities of the listing (number of bathrooms, number of bedrooms …), the listing’s zip code, the host’s description of the listing, the price of the listing, and the occupancy rate of the listing. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What are the user-defined keywords?",
"Does the method achieve sota performance on this dataset?",
"What are the baselines used in the paper?",
"What is the size of the Airbnb?"
],
"question_id": [
"506d21501d54a12d0c9fd3dbbf19067802439a04",
"701571680724c05ca70c11bc267fb1160ea1460a",
"600b097475b30480407ce1de81c28c54a0b3b2f8",
"ee7e9a948ee6888aa5830b1a3d0d148ff656d864"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: RNN/LSTM Framework",
"Figure 2: GAN Framework",
"Figure 3: RNN/LSTM Accuracy over Number of Epochs",
"Figure 4: RNN/LSTM Loss over Number of Epochs",
"Table 1: Results of RNN/LSTM",
"Table 2: GAN Model, Keywords = [parking], Varying Gamma Parameter"
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png",
"6-Table1-1.png",
"6-Table2-1.png"
]
} | [
"What are the user-defined keywords?",
"What are the baselines used in the paper?"
] | [
[
"1806.11432-Background-3",
"1806.11432-Data-0"
],
[
"1806.11432-6-Table1-1.png"
]
] | [
"terms common to hosts' descriptions of popular Airbnb properties, like 'subway', 'manhattan', or 'parking'",
"GloVe vectors trained on Wikipedia Corpus with ensembling, and GloVe vectors trained on Airbnb Data without ensembling"
] | 155 |
1910.14537 | Attention Is All You Need for Chinese Word Segmentation | This paper presents a fast and accurate Chinese word segmentation (CWS) model with only unigram feature and greedy decoding algorithm. Our model uses only attention mechanism for network block building. In detail, we adopt a Transformer-based encoder empowered by self-attention mechanism as backbone to take input representation. Then we extend the Transformer encoder with our proposed Gaussian-masked directional multi-head attention, which is a variant of scaled dot-product attention. At last, a bi-affinal attention scorer is to make segmentation decision in a linear time. Our model is evaluated on SIGHAN Bakeoff benchmark dataset. The experimental results show that with the highest segmentation speed, the proposed attention-only model achieves new state-of-the-art or comparable performance against strong baselines in terms of closed test setting. | {
"paragraphs": [
[
"Chinese word segmentation (CWS) is a task for Chinese natural language process to delimit word boundary. CWS is a basic and essential task for Chinese which is written without explicit word delimiters and different from alphabetical languages like English. BIBREF0 treats Chinese word segmentation (CWS) as a sequence labeling task with character position tags, which is followed by BIBREF1, BIBREF2, BIBREF3. Traditional CWS models depend on the design of features heavily which effects the performance of model. To minimize the effort in feature engineering, some CWS models BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11 are developed following neural network architecture for sequence labeling tasks BIBREF12. Neural CWS models perform strong ability of feature representation, employing unigram and bigram character embedding as input and approach good performance.",
"The CWS task is often modeled as one graph model based on a scoring model that means it is composed of two parts, one part is an encoder which is used to generate the representation of characters from the input sequence, the other part is a decoder which performs segmentation according to the encoder scoring. Table TABREF1 summarizes typical CWS models according to their decoding ways for both traditional and neural models. Markov models such as BIBREF13 and BIBREF4 depend on the maximum entropy model or maximum entropy Markov model both with a Viterbi decoder. Besides, conditional random field (CRF) or Semi-CRF for sequence labeling has been used for both traditional and neural models though with different representations BIBREF2, BIBREF15, BIBREF10, BIBREF17, BIBREF18. Generally speaking, the major difference between traditional and neural network models is about the way to represent input sentences.",
"Recent works about neural CWS which focus on benchmark dataset, namely SIGHAN Bakeoff BIBREF21, may be put into the following three categories roughly.",
"Encoder. Practice in various natural language processing tasks has been shown that effective representation is essential to the performance improvement. Thus for better CWS, it is crucial to encode the input character, word or sentence into effective representation. Table TABREF2 summarizes regular feature sets for typical CWS models including ours as well. The building blocks that encoders use include recurrent neural network (RNN) and convolutional neural network (CNN), and long-term memory network (LSTM).",
"Graph model. As CWS is a kind of structure learning task, the graph model determines which type of decoder should be adopted for segmentation, also it may limit the capability of defining feature, as shown in Table 2, not all graph models can support the word features. Thus recent work focused on finding more general or flexible graph model to make model learn the representation of segmentation more effective as BIBREF9, BIBREF11.",
"External data and pre-trained embedding. Whereas both encoder and graph model are about exploring a way to get better performance only by improving the model strength itself. Using external resource such as pre-trained embeddings or language representation is an alternative for the same purpose BIBREF22, BIBREF23. SIGHAN Bakeoff defines two types of evaluation settings, closed test limits all the data for learning should not be beyond the given training set, while open test does not take this limitation BIBREF21. In this work, we will focus on the closed test setting by finding a better model design for further CWS performance improvement.",
"Shown in Table TABREF1, different decoders have particular decoding algorithms to match the respective CWS models. Markov models and CRF-based models often use Viterbi decoders with polynomial time complexity. In general graph model, search space may be too large for model to search. Thus it forces graph models to use an approximate beam search strategy. Beam search algorithm has a kind low-order polynomial time complexity. Especially, when beam width $b$=1, the beam search algorithm will reduce to greedy algorithm with a better time complexity $O(Mn)$ against the general beam search time complexity $O(Mnb^2)$, where $n$ is the number of units in one sentences, $M$ is a constant representing the model complexity. Greedy decoding algorithm can bring the fastest speed of decoding while it is not easy to guarantee the precision of decoding when the encoder is not strong enough.",
"In this paper, we focus on more effective encoder design which is capable of offering fast and accurate Chinese word segmentation with only unigram feature and greedy decoding. Our proposed encoder will only consist of attention mechanisms as building blocks but nothing else. Motivated by the Transformer BIBREF24 and its strength of capturing long-range dependencies of input sentences, we use a self-attention network to generate the representation of input which makes the model encode sentences at once without feeding input iteratively. Considering the weakness of the Transformer to model relative and absolute position information directly BIBREF25 and the importance of localness information, position information and directional information for CWS, we further improve the architecture of standard multi-head self-attention of the Transformer with a directional Gaussian mask and get a variant called Gaussian-masked directional multi-head attention. Based on the newly improved attention mechanism, we expand the encoder of the Transformer to capture different directional information. With our powerful encoder, our model uses only simple unigram features to generate representation of sentences.",
"For decoder which directly performs the segmentation, we use the bi-affinal attention scorer, which has been used in dependency parsing BIBREF26 and semantic role labeling BIBREF27, to implement greedy decoding on finding the boundaries of words. In our proposed model, greedy decoding ensures a fast segmentation while powerful encoder design ensures a good enough segmentation performance even working with greedy decoder together. Our model will be strictly evaluated on benchmark datasets from SIGHAN Bakeoff shared task on CWS in terms of closed test setting, and the experimental results show that our proposed model achieves new state-of-the-art.",
"The technical contributions of this paper can be summarized as follows.",
"We propose a CWS model with only attention structure. The encoder and decoder are both based on attention structure.",
"With a powerful enough encoder, we for the first time show that unigram (character) featues can help yield strong performance instead of diverse $n$-gram (character and word) features in most of previous work.",
"To capture the representation of localness information and directional information, we propose a variant of directional multi-head self-attention to further enhance the state-of-the-art Transformer encoder."
],
[
"The CWS task is often modelled as one graph model based on an encoder-based scoring model. The model for CWS task is composed of an encoder to represent the input and a decoder based on the encoder to perform actual segmentation. Figure FIGREF6 is the architecture of our model. The model feeds sentence into encoder. Embedding captures the vector $e=(e_1,...,e_n)$ of the input character sequences of $c=(c_1,...,c_n)$. The encoder maps vector sequences of $ {e}=(e_1,..,e_n)$ to two sequences of vector which are $ {v^b}=(v_1^b,...,v_n^b)$ and ${v^f}=(v_1^f,...v_n^f)$ as the representation of sentences. With $v^b$ and $v^f$, the bi-affinal scorer calculates the probability of each segmentation gaps and predicts the word boundaries of input. Similar as the Transformer, the encoder is an attention network with stacked self-attention and point-wise, fully connected layers while our encoder includes three independent directional encoders."
],
[
"In the Transformer, the encoder is composed of a stack of N identical layers and each layer has one multi-head self-attention layer and one position-wise fully connected feed-forward layer. One residual connection is around two sub-layers and followed by layer normalization BIBREF24. This architecture provides the Transformer a good ability to generate representation of sentence.",
"With the variant of multi-head self-attention, we design a Gaussian-masked directional encoder to capture representation of different directions to improve the ability of capturing the localness information and position information for the importance of adjacent characters. One unidirectional encoder can capture information of one particular direction.",
"For CWS tasks, one gap of characters, which is from a word boundary, can divide one sequence into two parts, one part in front of the gap and one part in the rear of it. The forward encoder and backward encoder are used to capture information of two directions which correspond to two parts divided by the gap.",
"One central encoder is paralleled with forward and backward encoders to capture the information of entire sentences. The central encoder is a special directional encoder for forward and backward information of sentences. The central encoder can fuse the information and enable the encoder to capture the global information.",
"The encoder outputs one forward information and one backward information of each positions. The representation of sentence generated by center encoder will be added to these information directly:",
"where $v^{b}=(v^b_1,...,v^b_n)$ is the backward information, $v^{f}=(v^f_1,...,v^f_n)$ is the forward information, $r^{b}=(r^b_1,...,r^b_n)$ is the output of the backward encoder, $r^{c}=(r^c_1,...,r^c_n)$ is the output of the central encoder, and $r^{f}=(r^f_1,...,r^f_n)$ is the output of the forward encoder."
],
[
"Similar as scaled dot-product attention BIBREF24, Gaussian-masked directional attention can be described as a function to map queries and key-value pairs to the representation of input. Here queries, keys and values are all vectors. Standard scaled dot-product attention is calculated by dotting query $Q$ with all keys $K$, dividing each values by $\\sqrt{d_k}$, where $\\sqrt{d_k}$ is the dimension of keys, and apply a softmax function to generate the weights in the attention:",
"Different from scaled dot-product attention, Gaussian-masked directional attention expects to pay attention to the adjacent characters of each positions and cast the localness relationship between characters as a fix Gaussian weight for attention. We assume that the Gaussian weight only relys on the distance between characters.",
"Firstly we introduce the Gaussian weight matrix $G$ which presents the localness relationship between each two characters:",
"where $g_{ij}$ is the Gaussian weight between character $i$ and $j$, $dis_{ij}$ is the distance between character $i$ and $j$, $\\Phi (x)$ is the cumulative distribution function of Gaussian, $\\sigma $ is the standard deviation of Gaussian function and it is a hyperparameter in our method. Equation (DISPLAY_FORM13) can ensure the Gaussian weight equals 1 when $dis_{ij}$ is 0. The larger distance between charactersis, the smaller the weight is, which makes one character can affect its adjacent characters more compared with other characters.",
"To combine the Gaussian weight to the self-attention, we produce the Hadamard product of Gaussian weight matrix $G$ and the score matrix produced by $Q{K^{T}}$",
"where $AG$ is the Gaussian-masked attention. It ensures that the relationship between two characters with long distances is weaker than adjacent characters.",
"The scaled dot-product attention models the relationship between two characters without regard to their distances in one sequence. For CWS task, the weight between adjacent characters should be more important while it is hard for self-attention to achieve the effect explicitly because the self-attention cannot get the order of sentences directly. The Gaussian-masked attention adjusts the weight between characters and their adjacent character to a larger value which stands for the effect of adjacent characters.",
"For forward and backward encoder, the self-attention sublayer needs to use a triangular matrix mask to let the self-attention focus on different weights:",
"where $pos_i$ is the position of character $c_i$. The triangular matrix for forward and backward encode are:",
"$\\left[ \\begin{matrix} 1 & 0 & 0 & \\cdots &0\\\\ 1 & 1 & 0 & \\cdots &0\\\\ 1 & 1 & 1 & \\cdots &0\\\\ \\vdots &\\vdots &\\vdots &\\ddots &\\vdots \\\\ 1 & 1 & 1 & \\cdots & 1\\\\ \\end{matrix} \\right]$ $\\left[ \\begin{matrix} 1 & 1 & 1 & \\cdots &1 \\\\ 0 & 1 & 1 & \\cdots &1 \\\\ 0 & 0& 1 & \\cdots &1 \\\\ \\vdots &\\vdots &\\vdots &\\ddots &\\vdots \\\\ 0 & 0 & 0 & \\cdots & 1\\\\ \\end{matrix}\\right]$",
"Similar as BIBREF24, we use multi-head attention to capture information from different dimension positions as Figure FIGREF16 and get Gaussian-masked directional multi-head attention. With multi-head attention architecture, the representation of input can be captured by",
"where $MH$ is the Gaussian-masked multi-head attention, ${W_i^q, W_i^k,W_i^v} \\in \\mathbb {R}^{d_k \\times d_h}$ is the parameter matrices to generate heads, $d_k$ is the dimension of model and $d_h$ is the dimension of one head."
],
[
"Regarding word boundaries as gaps between any adjacent words converts the character labeling task to the gap labeling task. Different from character labeling task, gap labeling task requires information of two adjacent characters. The relationship between adjacent characters can be represented as the type of gap. The characteristic of word boundaries makes bi-affine attention an appropriate scorer for CWS task.",
"Bi-affinal attention scorer is the component that we use to label the gap. Bi-affinal attention is developed from bilinear attention which has been used in dependency parsing BIBREF26 and SRL BIBREF27. The distribution of labels in a labeling task is often uneven which makes the output layer often include a fixed bias term for the prior probability of different labels BIBREF27. Bi-affine attention uses bias terms to alleviate the burden of the fixed bias term and get the prior probability which makes it different from bilinear attention. The distribution of the gap is uneven that is similar as other labeling task which fits bi-affine.",
"Bi-affinal attention scorer labels the target depending on information of independent unit and the joint information of two units. In bi-affinal attention, the score $s_{ij}$ of characters $c_i$ and $c_j$ $(i < j)$ is calculated by:",
"where $v_i^f$ is the forward information of $c_i$ and $v_i^b$ is the backward information of $c_j$. In Equation (DISPLAY_FORM21), $W$, $U$ and $b$ are all parameters that can be updated in training. $W$ is a matrix with shape $(d_i \\times N\\times d_j)$ and $U$ is a $(N\\times (d_i + d_j))$ matrix where $d_i$ is the dimension of vector $v_i^f$ and $N$ is the number of labels.",
"In our model, the biaffine scorer uses the forward information of character in front of the gap and the backward information of the character behind the gap to distinguish the position of characters. Figure FIGREF22 is an example of labeling gap. The method of using biaffine scorer ensures that the boundaries of words can be determined by adjacent characters with different directional information. The score vector of the gap is formed by the probability of being a boundary of word. Further, the model generates all boundaries using activation function in a greedy decoding way."
],
[
"We train and evaluate our model on datasets from SIGHAN Bakeoff 2005 BIBREF21 which has four datasets, PKU, MSR, AS and CITYU. Table TABREF23 shows the statistics of train data. We use F-score to evaluate CWS models. To train model with pre-trained embeddings in AS and CITYU, we use OpenCC to transfer data from traditional Chinese to simplified Chinese."
],
[
"We only use unigram feature so we only trained character embeddings. Our pre-trained embedding are pre-trained on Chinese Wikipedia corpus by word2vec BIBREF29 toolkit. The corpus used for pre-trained embedding is all transferred to simplified Chinese and not segmented. On closed test, we use embeddings initialized randomly."
],
[
"For different datasets, we use two kinds of hyperparameters which are presented in Table TABREF24. We use hyperparameters in Table TABREF24 for small corpora (PKU and CITYU) and normal corpora (MSR and AS). We set the standard deviation of Gaussian function in Equation (DISPLAY_FORM13) to 2. Each training batch contains sentences with at most 4096 tokens."
],
[
"To train our model, we use the Adam BIBREF30 optimizer with $\\beta _1=0.9$, $\\beta _2=0.98$ and $\\epsilon =10^{-9}$. The learning rate schedule is the same as BIBREF24:",
"where $d$ is the dimension of embeddings, $step$ is the step number of training and $warmup_step$ is the step number of warmup. When the number of steps is smaller than the step of warmup, the learning rate increases linearly and then decreases."
],
[
"We trained our models on a single CPU (Intel i7-5960X) with an nVidia 1080 Ti GPU. We implement our model in Python with Pytorch 1.0."
],
[
"Tables TABREF25 and TABREF26 reports the performance of recent models and ours in terms of closed test setting. Without the assistance of unsupervised segmentation features userd in BIBREF20, our model outperforms all the other models in MSR and AS except BIBREF18 and get comparable performance in PKU and CITYU. Note that all the other models for this comparison adopt various $n$-gram features while only our model takes unigram ones.",
"With unsupervised segmentation features introduced by BIBREF20, our model gets a higher result. Specially, the results in MSR and AS achieve new state-of-the-art and approaching previous state-of-the-art in CITYU and PKU. The unsupervised segmentation features are derived from the given training dataset, thus using them does not violate the rule of closed test of SIGHAN Bakeoff.",
"Table TABREF36 compares our model and recent neural models in terms of open test setting in which any external resources, especially pre-trained embeddings or language models can be used. In MSR and AS, our model gets a comparable result while our results in CITYU and PKU are not remarkable.",
"However, it is well known that it is always hard to compare models when using open test setting, especially with pre-trained embedding. Not all models may use the same method and data to pre-train. Though pre-trained embedding or language model can improve the performance, the performance improvement itself may be from multiple sources. It often that there is a success of pre-trained embedding to improve the performance, while it cannot prove that the model is better.",
"Compared with other LSTM models, our model performs better in AS and MSR than in CITYU and PKU. Considering the scale of different corpora, we believe that the size of corpus affects our model and the larger size is, the better model performs. For small corpus, the model tends to be overfitting.",
"Tables TABREF25 and TABREF26 also show the decoding time in different datasets. Our model finishes the segmentation with the least decoding time in all four datasets, thanks to the architecture of model which only takes attention mechanism as basic block."
],
[
"CWS is a task for Chinese natural language process to delimit word boundary. BIBREF0 for the first time formulize CWS as a sequence labeling task. BIBREF3 show that different character tag sets can make essential impact for CWS. BIBREF2 use CRFs as a model for CWS, achieving new state-of-the-art. Works of statistical CWS has built the basis for neural CWS.",
"Neural word segmentation has been widely used to minimize the efforts in feature engineering which was important in statistical CWS. BIBREF4 introduce the neural model with sliding-window based sequence labeling. BIBREF6 propose a gated recursive neural network (GRNN) for CWS to incorporate complicated combination of contextual character and n-gram features. BIBREF7 use LSTM to learn long distance information. BIBREF9 propose a neural framework that eliminates context windows and utilize complete segmentation history. BIBREF33 explore a joint model that performs segmentation, POS-Tagging and chunking simultaneously. BIBREF34 propose a feature-enriched neural model for joint CWS and part-of-speech tagging. BIBREF35 present a joint model to enhance the segmentation of Chinese microtext by performing CWS and informal word detection simultaneously. BIBREF17 propose a character-based convolutional neural model to capture $n$-gram features automatically and an effective approach to incorporate word embeddings. BIBREF11 improve the model in BIBREF9 and propose a greedy neural word segmenter with balanced word and character embedding inputs. BIBREF23 propose a novel neural network model to incorporate unlabeled and partially-labeled data. BIBREF36 propose two methods that extend the Bi-LSTM to perform incorporating dictionaries into neural networks for CWS. BIBREF37 propose Switch-LSTMs to segment words and provided a more flexible solution for multi-criteria CWS which is easy to transfer the learned knowledge to new criteria."
],
[
"Transformer BIBREF24 is an attention-based neural machine translation model. The Transformer is one kind of self-attention networks (SANs) which is proposed in BIBREF38. Encoder of the Transformer consists of one self-attention layer and a position-wise feed-forward layer. Decoder of the Transformer contains one self-attention layer, one encoder-decoder attention layer and one position-wise feed-forward layer. The Transformer uses residual connections around the sublayers and then followed by a layer normalization layer.",
"Scaled dot-product attention is the key component in the Transformer. The input of attention contains queries, keys, and values of input sequences. The attention is generated using queries and keys like Equation (DISPLAY_FORM11). Structure of scaled dot-product attention allows the self-attention layer generate the representation of sentences at once and contain the information of the sentence which is different from RNN that process characters of sentences one by one. Standard self-attention is similar as Gaussian-masked direction attention while it does not have directional mask and gaussian mask. BIBREF24 also propose multi-head attention which is better to generate representation of sentence by dividing queries, keys and values to different heads and get information from different subspaces."
],
[
"In this paper, we propose an attention mechanism only based Chinese word segmentation model. Our model uses self-attention from the Transformer encoder to take sequence input and bi-affine attention scorer to predict the label of gaps. To improve the ability of capturing the localness and directional information of self-attention based encoder, we propose a variant of self-attention called Gaussian-masked directional multi-head attention to replace the standard self-attention. We also extend the Transformer encoder to capture directional features. Our model uses only unigram features instead of multiple $n$-gram features in previous work. Our model is evaluated on standard benchmark dataset, SIGHAN Bakeoff 2005, which shows not only our model performs segmentation faster than any previous models but also gives new higher or comparable segmentation performance against previous state-of-the-art models."
]
],
"section_name": [
"Introduction",
"Models",
"Models ::: Encoder Stacks",
"Models ::: Gaussian-Masked Directional Multi-Head Attention",
"Models ::: Bi-affinal Attention Scorer",
"Experiments ::: Experimental Settings ::: Data",
"Experiments ::: Experimental Settings ::: Pre-trained Embedding",
"Experiments ::: Experimental Settings ::: Hyperparameters",
"Experiments ::: Experimental Settings ::: Optimizer",
"Experiments ::: Hardware and Implements",
"Experiments ::: Results",
"Related Work ::: Chinese Word Segmentation",
"Related Work ::: Transformer",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"295935ac62b1810952ae7540d1362f10c2ec08f0",
"f8c9a61124e291a153278d8650a27dd05a818907"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).",
"FLOAT SELECTED: Table 6: Results on AS and CITYU compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)."
],
"extractive_spans": [],
"free_form_answer": "F1 score of 97.5 on MSR and 95.7 on AS",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).",
"FLOAT SELECTED: Table 6: Results on AS and CITYU compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"With unsupervised segmentation features introduced by BIBREF20, our model gets a higher result. Specially, the results in MSR and AS achieve new state-of-the-art and approaching previous state-of-the-art in CITYU and PKU. The unsupervised segmentation features are derived from the given training dataset, thus using them does not violate the rule of closed test of SIGHAN Bakeoff.",
"FLOAT SELECTED: Table 6: Results on AS and CITYU compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).",
"FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)."
],
"extractive_spans": [],
"free_form_answer": "MSR: 97.7 compared to 97.5 of baseline\nAS: 95.7 compared to 95.6 of baseline",
"highlighted_evidence": [
" Specially, the results in MSR and AS achieve new state-of-the-art and approaching previous state-of-the-art in CITYU and PKU. ",
"FLOAT SELECTED: Table 6: Results on AS and CITYU compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).",
"FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"239e38fcea102633f4f6c3a80f00cb858c124ed9",
"999f0b01d8c5de9cccff378ba597b7e03d3cce4f"
],
"answer": [
{
"evidence": [
"Different from scaled dot-product attention, Gaussian-masked directional attention expects to pay attention to the adjacent characters of each positions and cast the localness relationship between characters as a fix Gaussian weight for attention. We assume that the Gaussian weight only relys on the distance between characters."
],
"extractive_spans": [],
"free_form_answer": "pays attentions to adjacent characters and casts a localness relationship between the characters as a fixed Gaussian weight assuming the weight relies on the distance between characters",
"highlighted_evidence": [
"Different from scaled dot-product attention, Gaussian-masked directional attention expects to pay attention to the adjacent characters of each positions and cast the localness relationship between characters as a fix Gaussian weight for attention. We assume that the Gaussian weight only relys on the distance between characters."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Similar as scaled dot-product attention BIBREF24, Gaussian-masked directional attention can be described as a function to map queries and key-value pairs to the representation of input. Here queries, keys and values are all vectors. Standard scaled dot-product attention is calculated by dotting query $Q$ with all keys $K$, dividing each values by $\\sqrt{d_k}$, where $\\sqrt{d_k}$ is the dimension of keys, and apply a softmax function to generate the weights in the attention:",
"Different from scaled dot-product attention, Gaussian-masked directional attention expects to pay attention to the adjacent characters of each positions and cast the localness relationship between characters as a fix Gaussian weight for attention. We assume that the Gaussian weight only relys on the distance between characters."
],
"extractive_spans": [
"Gaussian-masked directional attention can be described as a function to map queries and key-value pairs to the representation of input",
"Gaussian-masked directional attention expects to pay attention to the adjacent characters of each positions and cast the localness relationship between characters as a fix Gaussian weight for attention",
"Gaussian weight only relys on the distance between characters"
],
"free_form_answer": "",
"highlighted_evidence": [
"Similar as scaled dot-product attention BIBREF24, Gaussian-masked directional attention can be described as a function to map queries and key-value pairs to the representation of input. Here queries, keys and values are all vectors. Standard scaled dot-product attention is calculated by dotting query $Q$ with all keys $K$, dividing each values by $\\sqrt{d_k}$, where $\\sqrt{d_k}$ is the dimension of keys, and apply a softmax function to generate the weights in the attention:\n\nDifferent from scaled dot-product attention, Gaussian-masked directional attention expects to pay attention to the adjacent characters of each positions and cast the localness relationship between characters as a fix Gaussian weight for attention. We assume that the Gaussian weight only relys on the distance between characters."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2c9650fa969b8f7ac6a20e08d535b3d73e2d4b66",
"f5cd7de937d1c9bbb1431cc7c22b8e60893b7624"
],
"answer": [
{
"evidence": [
"External data and pre-trained embedding. Whereas both encoder and graph model are about exploring a way to get better performance only by improving the model strength itself. Using external resource such as pre-trained embeddings or language representation is an alternative for the same purpose BIBREF22, BIBREF23. SIGHAN Bakeoff defines two types of evaluation settings, closed test limits all the data for learning should not be beyond the given training set, while open test does not take this limitation BIBREF21. In this work, we will focus on the closed test setting by finding a better model design for further CWS performance improvement."
],
"extractive_spans": [
"closed test limits all the data for learning should not be beyond the given training set, while open test does not take this limitation"
],
"free_form_answer": "",
"highlighted_evidence": [
"SIGHAN Bakeoff defines two types of evaluation settings, closed test limits all the data for learning should not be beyond the given training set, while open test does not take this limitation BIBREF21."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"External data and pre-trained embedding. Whereas both encoder and graph model are about exploring a way to get better performance only by improving the model strength itself. Using external resource such as pre-trained embeddings or language representation is an alternative for the same purpose BIBREF22, BIBREF23. SIGHAN Bakeoff defines two types of evaluation settings, closed test limits all the data for learning should not be beyond the given training set, while open test does not take this limitation BIBREF21. In this work, we will focus on the closed test setting by finding a better model design for further CWS performance improvement."
],
"extractive_spans": [
"closed test limits all the data for learning should not be beyond the given training set"
],
"free_form_answer": "",
"highlighted_evidence": [
"SIGHAN Bakeoff defines two types of evaluation settings, closed test limits all the data for learning should not be beyond the given training set, while open test does not take this limitation BIBREF21."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"7c302f61f0019d873fb807b08247b8d62eb94788"
],
"answer": [
{
"evidence": [
"Tables TABREF25 and TABREF26 reports the performance of recent models and ours in terms of closed test setting. Without the assistance of unsupervised segmentation features userd in BIBREF20, our model outperforms all the other models in MSR and AS except BIBREF18 and get comparable performance in PKU and CITYU. Note that all the other models for this comparison adopt various $n$-gram features while only our model takes unigram ones.",
"FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)."
],
"extractive_spans": [],
"free_form_answer": "Baseline models are:\n- Chen et al., 2015a\n- Chen et al., 2015b\n- Liu et al., 2016\n- Cai and Zhao, 2016\n- Cai et al., 2017\n- Zhou et al., 2017\n- Ma et al., 2018\n- Wang et al., 2019",
"highlighted_evidence": [
"Tables TABREF25 and TABREF26 reports the performance of recent models and ours in terms of closed test setting.",
"FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How better is performance compared to previous state-of-the-art models?",
"How does Gaussian-masked directional multi-head attention works?",
"What is meant by closed test setting?",
"What are strong baselines model is compared to?"
],
"question_id": [
"5fda8539a97828e188ba26aad5cda1b9dd642bc8",
"709feae853ec0362d4e883db8af41620da0677fe",
"186b7978ee33b563a37139adff1da7d51a60f581",
"fabcd71644bb63559d34b38d78f6ef87c256d475"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: The classification of Chinese word segmentation model.",
"Table 2: Feature windows of different models. i(j) is the index of current character(word).",
"Figure 1: The architecture of our model.",
"Figure 2: The structure of Gaussian-Masked directional encoder.",
"Figure 3: The illustration of Gaussian-masked directional multi-head attention and Gaussian-masked directional attention.",
"Table 3: The statistics of SIGHAN Bakeoff 2005 datasets.",
"Table 4: Hyperparameters.",
"Figure 4: An example of bi-affinal scorer labeling the gap. The bi-affinal attention scorer only uses the forward information of front character and the backward information of character to label the gap.",
"Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).",
"Table 6: Results on AS and CITYU compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).",
"Table 7: F1 scores of our results on four datasets in open test compared with previous models."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"6-Figure4-1.png",
"7-Table5-1.png",
"7-Table6-1.png",
"8-Table7-1.png"
]
} | [
"How better is performance compared to previous state-of-the-art models?",
"How does Gaussian-masked directional multi-head attention works?",
"What are strong baselines model is compared to?"
] | [
[
"1910.14537-7-Table6-1.png",
"1910.14537-Experiments ::: Results-1",
"1910.14537-7-Table5-1.png"
],
[
"1910.14537-Models ::: Gaussian-Masked Directional Multi-Head Attention-0",
"1910.14537-Models ::: Gaussian-Masked Directional Multi-Head Attention-1"
],
[
"1910.14537-7-Table5-1.png",
"1910.14537-Experiments ::: Results-0"
]
] | [
"MSR: 97.7 compared to 97.5 of baseline\nAS: 95.7 compared to 95.6 of baseline",
"pays attentions to adjacent characters and casts a localness relationship between the characters as a fixed Gaussian weight assuming the weight relies on the distance between characters",
"Baseline models are:\n- Chen et al., 2015a\n- Chen et al., 2015b\n- Liu et al., 2016\n- Cai and Zhao, 2016\n- Cai et al., 2017\n- Zhou et al., 2017\n- Ma et al., 2018\n- Wang et al., 2019"
] | 156 |
1808.10245 | Comparative Studies of Detecting Abusive Language on Twitter | The context-dependent nature of online aggression makes annotating large collections of data extremely difficult. Previously studied datasets in abusive language detection have been insufficient in size to efficiently train deep learning models. Recently, Hate and Abusive Speech on Twitter, a dataset much greater in size and reliability, has been released. However, this dataset has not been comprehensively studied to its potential. In this paper, we conduct the first comparative study of various learning models on Hate and Abusive Speech on Twitter, and discuss the possibility of using additional features and context data for improvements. Experimental results show that bidirectional GRU networks trained on word-level features, with Latent Topic Clustering modules, is the most accurate model scoring 0.805 F1. | {
"paragraphs": [
[
"Abusive language refers to any type of insult, vulgarity, or profanity that debases the target; it also can be anything that causes aggravation BIBREF0 , BIBREF1 . Abusive language is often reframed as, but not limited to, offensive language BIBREF2 , cyberbullying BIBREF3 , othering language BIBREF4 , and hate speech BIBREF5 .",
"Recently, an increasing number of users have been subjected to harassment, or have witnessed offensive behaviors online BIBREF6 . Major social media companies (i.e. Facebook, Twitter) have utilized multiple resources—artificial intelligence, human reviewers, user reporting processes, etc.—in effort to censor offensive language, yet it seems nearly impossible to successfully resolve the issue BIBREF7 , BIBREF8 .",
"The major reason of the failure in abusive language detection comes from its subjectivity and context-dependent characteristics BIBREF9 . For instance, a message can be regarded as harmless on its own, but when taking previous threads into account it may be seen as abusive, and vice versa. This aspect makes detecting abusive language extremely laborious even for human annotators; therefore it is difficult to build a large and reliable dataset BIBREF10 .",
"Previously, datasets openly available in abusive language detection research on Twitter ranged from 10K to 35K in size BIBREF9 , BIBREF11 . This quantity is not sufficient to train the significant number of parameters in deep learning models. Due to this reason, these datasets have been mainly studied by traditional machine learning methods. Most recently, Founta et al. founta2018large introduced Hate and Abusive Speech on Twitter, a dataset containing 100K tweets with cross-validated labels. Although this corpus has great potential in training deep models with its significant size, there are no baseline reports to date.",
"This paper investigates the efficacy of different learning models in detecting abusive language. We compare accuracy using the most frequently studied machine learning classifiers as well as recent neural network models. Reliable baseline results are presented with the first comparative study on this dataset. Additionally, we demonstrate the effect of different features and variants, and describe the possibility for further improvements with the use of ensemble models."
],
[
"The research community introduced various approaches on abusive language detection. Razavi et al. razavi2010offensive applied Naïve Bayes, and Warner and Hirschberg warner2012detecting used Support Vector Machine (SVM), both with word-level features to classify offensive language. Xiang et al. xiang2012detecting generated topic distributions with Latent Dirichlet Allocation BIBREF12 , also using word-level features in order to classify offensive tweets.",
"More recently, distributed word representations and neural network models have been widely applied for abusive language detection. Djuric et al. djuric2015hate used the Continuous Bag Of Words model with paragraph2vec algorithm BIBREF13 to more accurately detect hate speech than that of the plain Bag Of Words models. Badjatiya et al. badjatiya2017deep implemented Gradient Boosted Decision Trees classifiers using word representations trained by deep learning models. Other researchers have investigated character-level representations and their effectiveness compared to word-level representations BIBREF14 , BIBREF15 .",
"As traditional machine learning methods have relied on feature engineering, (i.e. n-grams, POS tags, user information) BIBREF1 , researchers have proposed neural-based models with the advent of larger datasets. Convolutional Neural Networks and Recurrent Neural Networks have been applied to detect abusive language, and they have outperformed traditional machine learning classifiers such as Logistic Regression and SVM BIBREF15 , BIBREF16 . However, there are no studies investigating the efficiency of neural models with large-scale datasets over 100K."
],
[
"This section illustrates our implementations on traditional machine learning classifiers and neural network based models in detail. Furthermore, we describe additional features and variant models investigated."
],
[
"We implement five feature engineering based machine learning classifiers that are most often used for abusive language detection. In data preprocessing, text sequences are converted into Bag Of Words (BOW) representations, and normalized with Term Frequency-Inverse Document Frequency (TF-IDF) values. We experiment with word-level features using n-grams ranging from 1 to 3, and character-level features from 3 to 8-grams. Each classifier is implemented with the following specifications:",
"Naïve Bayes (NB): Multinomial NB with additive smoothing constant 1",
"Logistic Regression (LR): Linear LR with L2 regularization constant 1 and limited-memory BFGS optimization",
"Support Vector Machine (SVM): Linear SVM with L2 regularization constant 1 and logistic loss function",
"Random Forests (RF): Averaging probabilistic predictions of 10 randomized decision trees",
"Gradient Boosted Trees (GBT): Tree boosting with learning rate 1 and logistic loss function"
],
[
"Along with traditional machine learning approaches, we investigate neural network based models to evaluate their efficacy within a larger dataset. In particular, we explore Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and their variant models. A pre-trained GloVe BIBREF17 representation is used for word-level features.",
"CNN: We adopt Kim's kim2014convolutional implementation as the baseline. The word-level CNN models have 3 convolutional filters of different sizes [1,2,3] with ReLU activation, and a max-pooling layer. For the character-level CNN, we use 6 convolutional filters of various sizes [3,4,5,6,7,8], then add max-pooling layers followed by 1 fully-connected layer with a dimension of 1024.",
"Park and Fung park2017one proposed a HybridCNN model which outperformed both word-level and character-level CNNs in abusive language detection. In order to evaluate the HybridCNN for this dataset, we concatenate the output of max-pooled layers from word-level and character-level CNN, and feed this vector to a fully-connected layer in order to predict the output.",
"All three CNN models (word-level, character-level, and hybrid) use cross entropy with softmax as their loss function and Adam BIBREF18 as the optimizer.",
"RNN: We use bidirectional RNN BIBREF19 as the baseline, implementing a GRU BIBREF20 cell for each recurrent unit. From extensive parameter-search experiments, we chose 1 encoding layer with 50 dimensional hidden states and an input dropout probability of 0.3. The RNN models use cross entropy with sigmoid as their loss function and Adam as the optimizer.",
"For a possible improvement, we apply a self-matching attention mechanism on RNN baseline models BIBREF21 so that they may better understand the data by retrieving text sequences twice. We also investigate a recently introduced method, Latent Topic Clustering (LTC) BIBREF22 . The LTC method extracts latent topic information from the hidden states of RNN, and uses it for additional information in classifying the text data."
],
[
"While manually analyzing the raw dataset, we noticed that looking at the tweet one has replied to or has quoted, provides significant contextual information. We call these, “context tweets\". As humans can better understand a tweet with the reference of its context, our assumption is that computers also benefit from taking context tweets into account in detecting abusive language.",
"As shown in the examples below, (2) is labeled abusive due to the use of vulgar language. However, the intention of the user can be better understood with its context tweet (1).",
"(1) I hate when I'm sitting in front of the bus and somebody with a wheelchair get on.",
" INLINEFORM0 (2) I hate it when I'm trying to board a bus and there's already an as**ole on it.",
"",
"Similarly, context tweet (3) is important in understanding the abusive tweet (4), especially in identifying the target of the malice.",
"(3) Survivors of #Syria Gas Attack Recount `a Cruel Scene'.",
" INLINEFORM0 (4) Who the HELL is “LIKE\" ING this post? Sick people....",
"",
"Huang et al. huang2016modeling used several attributes of context tweets for sentiment analysis in order to improve the baseline LSTM model. However, their approach was limited because the meta-information they focused on—author information, conversation type, use of the same hashtags or emojis—are all highly dependent on data.",
"In order to avoid data dependency, text sequences of context tweets are directly used as an additional feature of neural network models. We use the same baseline model to convert context tweets to vectors, then concatenate these vectors with outputs of their corresponding labeled tweets. More specifically, we concatenate max-pooled layers of context and labeled tweets for the CNN baseline model. As for RNN, the last hidden states of context and labeled tweets are concatenated."
],
[
" Hate and Abusive Speech on Twitter BIBREF10 classifies tweets into 4 labels, “normal\", “spam\", “hateful\" and “abusive\". We were only able to crawl 70,904 tweets out of 99,996 tweet IDs, mainly because the tweet was deleted or the user account had been suspended. Table shows the distribution of labels of the crawled data."
],
[
"In the data preprocessing steps, user IDs, URLs, and frequently used emojis are replaced as special tokens. Since hashtags tend to have a high correlation with the content of the tweet BIBREF23 , we use a segmentation library BIBREF24 for hashtags to extract more information.",
"For character-level representations, we apply the method Zhang et al. zhang2015character proposed. Tweets are transformed into one-hot encoded vectors using 70 character dimensions—26 lower-cased alphabets, 10 digits, and 34 special characters including whitespace."
],
[
"In training the feature engineering based machine learning classifiers, we truncate vector representations according to the TF-IDF values (the top 14,000 and 53,000 for word-level and character-level representations, respectively) to avoid overfitting. For neural network models, words that appear only once are replaced as unknown tokens.",
"Since the dataset used is not split into train, development, and test sets, we perform 10-fold cross validation, obtaining the average of 5 tries; we divide the dataset randomly by a ratio of 85:5:10, respectively. In order to evaluate the overall performance, we calculate the weighted average of precision, recall, and F1 scores of all four labels, “normal”, “spam”, “hateful”, and “abusive”."
],
[
"As shown in Table , neural network models are more accurate than feature engineering based models (i.e. NB, SVM, etc.) except for the LR model—the best LR model has the same F1 score as the best CNN model.",
"Among traditional machine learning models, the most accurate in classifying abusive language is the LR model followed by ensemble models such as GBT and RF. Character-level representations improve F1 scores of SVM and RF classifiers, but they have no positive effect on other models.",
"For neural network models, RNN with LTC modules have the highest accuracy score, but there are no significant improvements from its baseline model and its attention-added model. Similarly, HybridCNN does not improve the baseline CNN model. For both CNN and RNN models, character-level features significantly decrease the accuracy of classification.",
"The use of context tweets generally have little effect on baseline models, however they noticeably improve the scores of several metrics. For instance, CNN with context tweets score the highest recall and F1 for “hateful\" labels, and RNN models with context tweets have the highest recall for “abusive\" tweets."
],
[
"While character-level features are known to improve the accuracy of neural network models BIBREF16 , they reduce classification accuracy for Hate and Abusive Speech on Twitter. We conclude this is because of the lack of labeled data as well as the significant imbalance among the different labels. Unlike neural network models, character-level features in traditional machine learning classifiers have positive results because we have trained the models only with the most significant character elements using TF-IDF values.",
"Variants of neural network models also suffer from data insufficiency. However, these models show positive performances on “spam\" (14%) and “hateful\" (4%) tweets—the lower distributed labels. The highest F1 score for “spam\" is from the RNN-LTC model (0.551), and the highest for “hateful\" is CNN with context tweets (0.309). Since each variant model excels in different metrics, we expect to see additional improvements with the use of ensemble models of these variants in future works.",
"In this paper, we report the baseline accuracy of different learning models as well as their variants on the recently introduced dataset, Hate and Abusive Speech on Twitter. Experimental results show that bidirectional GRU networks with LTC provide the most accurate results in detecting abusive language. Additionally, we present the possibility of using ensemble models of variant models and features for further improvements."
],
[
"K. Jung is with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Korea. This work was supported by the National Research Foundation of Korea (NRF) funded by the Korea government (MSIT) (No. 2016M3C4A7952632), the Technology Innovation Program (10073144) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea).",
"We would also like to thank Yongkeun Hwang and Ji Ho Park for helpful discussions and their valuable insights."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Traditional Machine Learning Models",
"Neural Network based Models",
"Feature Extension",
"Dataset",
"Data Preprocessing",
"Training and Evaluation",
"Empirical Results",
"Discussion and Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"a7b97c6b02b740afdb26bf02eaa6ce2168aa566e",
"bb2940749e6b1852f793234976203d85e22f5d6c"
],
"answer": [
{
"evidence": [
"Previously, datasets openly available in abusive language detection research on Twitter ranged from 10K to 35K in size BIBREF9 , BIBREF11 . This quantity is not sufficient to train the significant number of parameters in deep learning models. Due to this reason, these datasets have been mainly studied by traditional machine learning methods. Most recently, Founta et al. founta2018large introduced Hate and Abusive Speech on Twitter, a dataset containing 100K tweets with cross-validated labels. Although this corpus has great potential in training deep models with its significant size, there are no baseline reports to date.",
"This paper investigates the efficacy of different learning models in detecting abusive language. We compare accuracy using the most frequently studied machine learning classifiers as well as recent neural network models. Reliable baseline results are presented with the first comparative study on this dataset. Additionally, we demonstrate the effect of different features and variants, and describe the possibility for further improvements with the use of ensemble models."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Previously, datasets openly available in abusive language detection research on Twitter ranged from 10K to 35K in size BIBREF9 , BIBREF11 . This quantity is not sufficient to train the significant number of parameters in deep learning models. Due to this reason, these datasets have been mainly studied by traditional machine learning methods. Most recently, Founta et al. founta2018large introduced Hate and Abusive Speech on Twitter, a dataset containing 100K tweets with cross-validated labels. Although this corpus has great potential in training deep models with its significant size, there are no baseline reports to date.\n\nThis paper investigates the efficacy of different learning models in detecting abusive language. We compare accuracy using the most frequently studied machine learning classifiers as well as recent neural network models. Reliable baseline results are presented with the first comparative study on this dataset. Additionally, we demonstrate the effect of different features and variants, and describe the possibility for further improvements with the use of ensemble models."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"de04c54cac657e5b8a5ed650a855a2bc2b29bb9d",
"e9529d7bee73ab821db1cc1c7b07e83740a17818"
],
"answer": [
{
"evidence": [
"While manually analyzing the raw dataset, we noticed that looking at the tweet one has replied to or has quoted, provides significant contextual information. We call these, “context tweets\". As humans can better understand a tweet with the reference of its context, our assumption is that computers also benefit from taking context tweets into account in detecting abusive language.",
"As shown in the examples below, (2) is labeled abusive due to the use of vulgar language. However, the intention of the user can be better understood with its context tweet (1).",
"(1) I hate when I'm sitting in front of the bus and somebody with a wheelchair get on.",
"INLINEFORM0 (2) I hate it when I'm trying to board a bus and there's already an as**ole on it.",
"Similarly, context tweet (3) is important in understanding the abusive tweet (4), especially in identifying the target of the malice.",
"(3) Survivors of #Syria Gas Attack Recount `a Cruel Scene'.",
"INLINEFORM0 (4) Who the HELL is “LIKE\" ING this post? Sick people....",
"Huang et al. huang2016modeling used several attributes of context tweets for sentiment analysis in order to improve the baseline LSTM model. However, their approach was limited because the meta-information they focused on—author information, conversation type, use of the same hashtags or emojis—are all highly dependent on data.",
"In order to avoid data dependency, text sequences of context tweets are directly used as an additional feature of neural network models. We use the same baseline model to convert context tweets to vectors, then concatenate these vectors with outputs of their corresponding labeled tweets. More specifically, we concatenate max-pooled layers of context and labeled tweets for the CNN baseline model. As for RNN, the last hidden states of context and labeled tweets are concatenated."
],
"extractive_spans": [],
"free_form_answer": "using tweets that one has replied or quoted to as contextual information",
"highlighted_evidence": [
"While manually analyzing the raw dataset, we noticed that looking at the tweet one has replied to or has quoted, provides significant contextual information. We call these, “context tweets\". As humans can better understand a tweet with the reference of its context, our assumption is that computers also benefit from taking context tweets into account in detecting abusive language.\n\nAs shown in the examples below, (2) is labeled abusive due to the use of vulgar language. However, the intention of the user can be better understood with its context tweet (1).\n\n(1) I hate when I'm sitting in front of the bus and somebody with a wheelchair get on.\n\nINLINEFORM0 (2) I hate it when I'm trying to board a bus and there's already an as**ole on it.\n\nSimilarly, context tweet (3) is important in understanding the abusive tweet (4), especially in identifying the target of the malice.\n\n(3) Survivors of #Syria Gas Attack Recount `a Cruel Scene'.\n\nINLINEFORM0 (4) Who the HELL is “LIKE\" ING this post? Sick people....\n\nHuang et al. huang2016modeling used several attributes of context tweets for sentiment analysis in order to improve the baseline LSTM model. However, their approach was limited because the meta-information they focused on—author information, conversation type, use of the same hashtags or emojis—are all highly dependent on data.\n\nIn order to avoid data dependency, text sequences of context tweets are directly used as an additional feature of neural network models. We use the same baseline model to convert context tweets to vectors, then concatenate these vectors with outputs of their corresponding labeled tweets. More specifically, we concatenate max-pooled layers of context and labeled tweets for the CNN baseline model. As for RNN, the last hidden states of context and labeled tweets are concatenated."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to avoid data dependency, text sequences of context tweets are directly used as an additional feature of neural network models. We use the same baseline model to convert context tweets to vectors, then concatenate these vectors with outputs of their corresponding labeled tweets. More specifically, we concatenate max-pooled layers of context and labeled tweets for the CNN baseline model. As for RNN, the last hidden states of context and labeled tweets are concatenated."
],
"extractive_spans": [
"text sequences of context tweets"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to avoid data dependency, text sequences of context tweets are directly used as an additional feature of neural network models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"470a78ae3411fb86f9feafacf6b5d092111521c6",
"e6aacde7e633a1c1e25d0f09d0082f458b470cf0"
],
"answer": [
{
"evidence": [
"We implement five feature engineering based machine learning classifiers that are most often used for abusive language detection. In data preprocessing, text sequences are converted into Bag Of Words (BOW) representations, and normalized with Term Frequency-Inverse Document Frequency (TF-IDF) values. We experiment with word-level features using n-grams ranging from 1 to 3, and character-level features from 3 to 8-grams. Each classifier is implemented with the following specifications:",
"Naïve Bayes (NB): Multinomial NB with additive smoothing constant 1",
"Logistic Regression (LR): Linear LR with L2 regularization constant 1 and limited-memory BFGS optimization",
"Support Vector Machine (SVM): Linear SVM with L2 regularization constant 1 and logistic loss function",
"Random Forests (RF): Averaging probabilistic predictions of 10 randomized decision trees",
"Gradient Boosted Trees (GBT): Tree boosting with learning rate 1 and logistic loss function",
"Along with traditional machine learning approaches, we investigate neural network based models to evaluate their efficacy within a larger dataset. In particular, we explore Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and their variant models. A pre-trained GloVe BIBREF17 representation is used for word-level features."
],
"extractive_spans": [
"Naïve Bayes (NB)",
"Logistic Regression (LR)",
"Support Vector Machine (SVM)",
"Random Forests (RF)",
"Gradient Boosted Trees (GBT)",
" Convolutional Neural Networks (CNN)",
"Recurrent Neural Networks (RNN)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Each classifier is implemented with the following specifications:\n\nNaïve Bayes (NB): Multinomial NB with additive smoothing constant 1\n\nLogistic Regression (LR): Linear LR with L2 regularization constant 1 and limited-memory BFGS optimization\n\nSupport Vector Machine (SVM): Linear SVM with L2 regularization constant 1 and logistic loss function\n\nRandom Forests (RF): Averaging probabilistic predictions of 10 randomized decision trees\n\nGradient Boosted Trees (GBT): Tree boosting with learning rate 1 and logistic loss function",
"Along with traditional machine learning approaches, we investigate neural network based models to evaluate their efficacy within a larger dataset. In particular, we explore Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and their variant models."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We implement five feature engineering based machine learning classifiers that are most often used for abusive language detection. In data preprocessing, text sequences are converted into Bag Of Words (BOW) representations, and normalized with Term Frequency-Inverse Document Frequency (TF-IDF) values. We experiment with word-level features using n-grams ranging from 1 to 3, and character-level features from 3 to 8-grams. Each classifier is implemented with the following specifications:",
"Naïve Bayes (NB): Multinomial NB with additive smoothing constant 1",
"Logistic Regression (LR): Linear LR with L2 regularization constant 1 and limited-memory BFGS optimization",
"Support Vector Machine (SVM): Linear SVM with L2 regularization constant 1 and logistic loss function",
"Random Forests (RF): Averaging probabilistic predictions of 10 randomized decision trees",
"Gradient Boosted Trees (GBT): Tree boosting with learning rate 1 and logistic loss function",
"Along with traditional machine learning approaches, we investigate neural network based models to evaluate their efficacy within a larger dataset. In particular, we explore Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and their variant models. A pre-trained GloVe BIBREF17 representation is used for word-level features.",
"CNN: We adopt Kim's kim2014convolutional implementation as the baseline. The word-level CNN models have 3 convolutional filters of different sizes [1,2,3] with ReLU activation, and a max-pooling layer. For the character-level CNN, we use 6 convolutional filters of various sizes [3,4,5,6,7,8], then add max-pooling layers followed by 1 fully-connected layer with a dimension of 1024.",
"Park and Fung park2017one proposed a HybridCNN model which outperformed both word-level and character-level CNNs in abusive language detection. In order to evaluate the HybridCNN for this dataset, we concatenate the output of max-pooled layers from word-level and character-level CNN, and feed this vector to a fully-connected layer in order to predict the output.",
"All three CNN models (word-level, character-level, and hybrid) use cross entropy with softmax as their loss function and Adam BIBREF18 as the optimizer.",
"RNN: We use bidirectional RNN BIBREF19 as the baseline, implementing a GRU BIBREF20 cell for each recurrent unit. From extensive parameter-search experiments, we chose 1 encoding layer with 50 dimensional hidden states and an input dropout probability of 0.3. The RNN models use cross entropy with sigmoid as their loss function and Adam as the optimizer.",
"For a possible improvement, we apply a self-matching attention mechanism on RNN baseline models BIBREF21 so that they may better understand the data by retrieving text sequences twice. We also investigate a recently introduced method, Latent Topic Clustering (LTC) BIBREF22 . The LTC method extracts latent topic information from the hidden states of RNN, and uses it for additional information in classifying the text data."
],
"extractive_spans": [
"Naïve Bayes (NB)",
"Logistic Regression (LR)",
"Support Vector Machine (SVM)",
"Random Forests (RF)",
"Gradient Boosted Trees (GBT)",
"CNN",
"RNN"
],
"free_form_answer": "",
"highlighted_evidence": [
"We implement five feature engineering based machine learning classifiers that are most often used for abusive language detection. In data preprocessing, text sequences are converted into Bag Of Words (BOW) representations, and normalized with Term Frequency-Inverse Document Frequency (TF-IDF) values. We experiment with word-level features using n-grams ranging from 1 to 3, and character-level features from 3 to 8-grams. Each classifier is implemented with the following specifications:\n\nNaïve Bayes (NB): Multinomial NB with additive smoothing constant 1\n\nLogistic Regression (LR): Linear LR with L2 regularization constant 1 and limited-memory BFGS optimization\n\nSupport Vector Machine (SVM): Linear SVM with L2 regularization constant 1 and logistic loss function\n\nRandom Forests (RF): Averaging probabilistic predictions of 10 randomized decision trees\n\nGradient Boosted Trees (GBT): Tree boosting with learning rate 1 and logistic loss function",
"Along with traditional machine learning approaches, we investigate neural network based models to evaluate their efficacy within a larger dataset. In particular, we explore Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and their variant models. A pre-trained GloVe BIBREF17 representation is used for word-level features.\n\nCNN: We adopt Kim's kim2014convolutional implementation as the baseline. The word-level CNN models have 3 convolutional filters of different sizes [1,2,3] with ReLU activation, and a max-pooling layer. For the character-level CNN, we use 6 convolutional filters of various sizes [3,4,5,6,7,8], then add max-pooling layers followed by 1 fully-connected layer with a dimension of 1024.\n\nPark and Fung park2017one proposed a HybridCNN model which outperformed both word-level and character-level CNNs in abusive language detection. In order to evaluate the HybridCNN for this dataset, we concatenate the output of max-pooled layers from word-level and character-level CNN, and feed this vector to a fully-connected layer in order to predict the output.\n\nAll three CNN models (word-level, character-level, and hybrid) use cross entropy with softmax as their loss function and Adam BIBREF18 as the optimizer.\n\nRNN: We use bidirectional RNN BIBREF19 as the baseline, implementing a GRU BIBREF20 cell for each recurrent unit. From extensive parameter-search experiments, we chose 1 encoding layer with 50 dimensional hidden states and an input dropout probability of 0.3. The RNN models use cross entropy with sigmoid as their loss function and Adam as the optimizer.\n\nFor a possible improvement, we apply a self-matching attention mechanism on RNN baseline models BIBREF21 so that they may better understand the data by retrieving text sequences twice. We also investigate a recently introduced method, Latent Topic Clustering (LTC) BIBREF22 . The LTC method extracts latent topic information from the hidden states of RNN, and uses it for additional information in classifying the text data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"2a5168cd51e79d51afc28ae6ca396d227d8089b7"
],
"answer": [
{
"evidence": [
"The major reason of the failure in abusive language detection comes from its subjectivity and context-dependent characteristics BIBREF9 . For instance, a message can be regarded as harmless on its own, but when taking previous threads into account it may be seen as abusive, and vice versa. This aspect makes detecting abusive language extremely laborious even for human annotators; therefore it is difficult to build a large and reliable dataset BIBREF10 ."
],
"extractive_spans": [
"detecting abusive language extremely laborious",
"it is difficult to build a large and reliable dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"For instance, a message can be regarded as harmless on its own, but when taking previous threads into account it may be seen as abusive, and vice versa. This aspect makes detecting abusive language extremely laborious even for human annotators; therefore it is difficult to build a large and reliable dataset BIBREF10 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Does the dataset feature only English language data?",
"What additional features and context are proposed?",
"What learning models are used on the dataset?",
"What examples of the difficulties presented by the context-dependent nature of online aggression do they authors give?"
],
"question_id": [
"da9c0637623885afaf023a319beee87898948fe9",
"8a1c0ef69b6022a0642ca131a8eacb5c97016640",
"48088a842f7a433d3290eb45eb0d4c6ab1d8f13c",
"4907096cf16d506937e592c50ae63b642da49052"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Label distribution of crawled tweets",
"Table 2: Experimental results of learning models and their variants, followed by the context tweet models. The top 2 scores are marked as bold for each metric."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png"
]
} | [
"What additional features and context are proposed?"
] | [
[
"1808.10245-Feature Extension-1",
"1808.10245-Feature Extension-2",
"1808.10245-Feature Extension-6",
"1808.10245-Feature Extension-9",
"1808.10245-Feature Extension-10",
"1808.10245-Feature Extension-0",
"1808.10245-Feature Extension-5"
]
] | [
"using tweets that one has replied or quoted to as contextual information"
] | 157 |
1910.12574 | A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media | Generated hateful and toxic content by a portion of users in social media is a rising phenomenon that motivated researchers to dedicate substantial efforts to the challenging direction of hateful content identification. We not only need an efficient automatic hate speech detection model based on advanced machine learning and natural language processing, but also a sufficiently large amount of annotated data to train a model. The lack of a sufficient amount of labelled hate speech data, along with the existing biases, has been the main issue in this domain of research. To address these needs, in this study we introduce a novel transfer learning approach based on an existing pre-trained language model called BERT (Bidirectional Encoder Representations from Transformers). More specifically, we investigate the ability of BERT at capturing hateful context within social media content by using new fine-tuning methods based on transfer learning. To evaluate our proposed approach, we use two publicly available datasets that have been annotated for racism, sexism, hate, or offensive content on Twitter. The results show that our solution obtains considerable performance on these datasets in terms of precision and recall in comparison to existing approaches. Consequently, our model can capture some biases in data annotation and collection process and can potentially lead us to a more accurate model. | {
"paragraphs": [
[
"",
"People are increasingly using social networking platforms such as Twitter, Facebook, YouTube, etc. to communicate their opinions and share information. Although the interactions among users on these platforms can lead to constructive conversations, they have been increasingly exploited for the propagation of abusive language and the organization of hate-based activities BIBREF0, BIBREF1, especially due to the mobility and anonymous environment of these online platforms. Violence attributed to online hate speech has increased worldwide. For example, in the UK, there has been a significant increase in hate speech towards the immigrant and Muslim communities following the UK's leaving the EU and the Manchester and London attacks. The US also has been a marked increase in hate speech and related crime following the Trump election. Therefore, governments and social network platforms confronting the trend must have tools to detect aggressive behavior in general, and hate speech in particular, as these forms of online aggression not only poison the social climate of the online communities that experience it, but can also provoke physical violence and serious harm BIBREF1.",
"Recently, the problem of online abusive detection has attracted scientific attention. Proof of this is the creation of the third Workshop on Abusive Language Online or Kaggle’s Toxic Comment Classification Challenge that gathered 4,551 teams in 2018 to detect different types of toxicities (threats, obscenity, etc.). In the scope of this work, we mainly focus on the term hate speech as abusive content in social media, since it can be considered a broad umbrella term for numerous kinds of insulting user-generated content. Hate speech is commonly defined as any communication criticizing a person or a group based on some characteristics such as gender, sexual orientation, nationality, religion, race, etc. Hate speech detection is not a stable or simple target because misclassification of regular conversation as hate speech can severely affect users’ freedom of expression and reputation, while misclassification of hateful conversations as unproblematic would maintain the status of online communities as unsafe environments BIBREF2.",
"To detect online hate speech, a large number of scientific studies have been dedicated by using Natural Language Processing (NLP) in combination with Machine Learning (ML) and Deep Learning (DL) methods BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF0. Although supervised machine learning-based approaches have used different text mining-based features such as surface features, sentiment analysis, lexical resources, linguistic features, knowledge-based features or user-based and platform-based metadata BIBREF8, BIBREF9, BIBREF10, they necessitate a well-defined feature extraction approach. The trend now seems to be changing direction, with deep learning models being used for both feature extraction and the training of classifiers. These newer models are applying deep learning approaches such as Convolutional Neural Networks (CNNs), Long Short-Term Memory Networks (LSTMs), etc.BIBREF6, BIBREF0 to enhance the performance of hate speech detection models, however, they still suffer from lack of labelled data or inability to improve generalization property.",
"Here, we propose a transfer learning approach for hate speech understanding using a combination of the unsupervised pre-trained model BERT BIBREF11 and some new supervised fine-tuning strategies. As far as we know, it is the first time that such exhaustive fine-tuning strategies are proposed along with a generative pre-trained language model to transfer learning to low-resource hate speech languages and improve performance of the task. In summary:",
"We propose a transfer learning approach using the pre-trained language model BERT learned on English Wikipedia and BookCorpus to enhance hate speech detection on publicly available benchmark datasets. Toward that end, for the first time, we introduce new fine-tuning strategies to examine the effect of different embedding layers of BERT in hate speech detection.",
"Our experiment results show that using the pre-trained BERT model and fine-tuning it on the downstream task by leveraging syntactical and contextual information of all BERT's transformers outperforms previous works in terms of precision, recall, and F1-score. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets. It can be a valuable clue in using pre-trained BERT model for debiasing hate speech datasets in future studies."
],
[
"Here, the existing body of knowledge on online hate speech and offensive language and transfer learning is presented.",
"Online Hate Speech and Offensive Language: Researchers have been studying hate speech on social media platforms such as Twitter BIBREF9, Reddit BIBREF12, BIBREF13, and YouTube BIBREF14 in the past few years. The features used in traditional machine learning approaches are the main aspects distinguishing different methods, and surface-level features such as bag of words, word-level and character-level $n$-grams, etc. have proven to be the most predictive features BIBREF3, BIBREF4, BIBREF5. Apart from features, different algorithms such as Support Vector Machines BIBREF15, Naive Baye BIBREF1, and Logistic Regression BIBREF5, BIBREF9, etc. have been applied for classification purposes. Waseem et al. BIBREF5 provided a test with a list of criteria based on the work in Gender Studies and Critical Race Theory (CRT) that can annotate a corpus of more than $16k$ tweets as racism, sexism, or neither. To classify tweets, they used a logistic regression model with different sets of features, such as word and character $n$-grams up to 4, gender, length, and location. They found that their best model produces character $n$-gram as the most indicative features, and using location or length is detrimental. Davidson et al. BIBREF9 collected a $24K$ corpus of tweets containing hate speech keywords and labelled the corpus as hate speech, offensive language, or neither by using crowd-sourcing and extracted different features such as $n$-grams, some tweet-level metadata such as the number of hashtags, mentions, retweets, and URLs, Part Of Speech (POS) tagging, etc. Their experiments on different multi-class classifiers showed that the Logistic Regression with L2 regularization performs the best at this task. Malmasi et al. BIBREF15 proposed an ensemble-based system that uses some linear SVM classifiers in parallel to distinguish hate speech from general profanity in social media.",
"As one of the first attempts in neural network models, Djuric et al. BIBREF16 proposed a two-step method including a continuous bag of words model to extract paragraph2vec embeddings and a binary classifier trained along with the embeddings to distinguish between hate speech and clean content. Badjatiya et al. BIBREF0 investigated three deep learning architectures, FastText, CNN, and LSTM, in which they initialized the word embeddings with either random or GloVe embeddings. Gambäck et al. BIBREF6 proposed a hate speech classifier based on CNN model trained on different feature embeddings such as word embeddings and character $n$-grams. Zhang et al. BIBREF7 used a CNN+GRU (Gated Recurrent Unit network) neural network model initialized with pre-trained word2vec embeddings to capture both word/character combinations (e. g., $n$-grams, phrases) and word/character dependencies (order information). Waseem et al. BIBREF10 brought a new insight to hate speech and abusive language detection tasks by proposing a multi-task learning framework to deal with datasets across different annotation schemes, labels, or geographic and cultural influences from data sampling. Founta et al. BIBREF17 built a unified classification model that can efficiently handle different types of abusive language such as cyberbullying, hate, sarcasm, etc. using raw text and domain-specific metadata from Twitter. Furthermore, researchers have recently focused on the bias derived from the hate speech training datasets BIBREF18, BIBREF2, BIBREF19. Davidson et al. BIBREF2 showed that there were systematic and substantial racial biases in five benchmark Twitter datasets annotated for offensive language detection. Wiegand et al. BIBREF19 also found that classifiers trained on datasets containing more implicit abuse (tweets with some abusive words) are more affected by biases rather than once trained on datasets with a high proportion of explicit abuse samples (tweets containing sarcasm, jokes, etc.).",
"Transfer Learning: Pre-trained vector representations of words, embeddings, extracted from vast amounts of text data have been encountered in almost every language-based tasks with promising results. Two of the most frequently used context-independent neural embeddings are word2vec and Glove extracted from shallow neural networks. The year 2018 has been an inflection point for different NLP tasks thanks to remarkable breakthroughs: Universal Language Model Fine-Tuning (ULMFiT) BIBREF20, Embedding from Language Models (ELMO) BIBREF21, OpenAI’ s Generative Pre-trained Transformer (GPT) BIBREF22, and Google’s BERT model BIBREF11. Howard et al. BIBREF20 proposed ULMFiT which can be applied to any NLP task by pre-training a universal language model on a general-domain corpus and then fine-tuning the model on target task data using discriminative fine-tuning. Peters et al. BIBREF21 used a bi-directional LSTM trained on a specific task to present context-sensitive representations of words in word embeddings by looking at the entire sentence. Radford et al. BIBREF22 and Devlin et al. BIBREF11 generated two transformer-based language models, OpenAI GPT and BERT respectively. OpenAI GPT BIBREF22 is an unidirectional language model while BERT BIBREF11 is the first deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus. BERT has two novel prediction tasks: Masked LM and Next Sentence Prediction. The pre-trained BERT model significantly outperformed ELMo and OpenAI GPT in a series of downstream tasks in NLP BIBREF11. Identifying hate speech and offensive language is a complicated task due to the lack of undisputed labelled data BIBREF15 and the inability of surface features to capture the subtle semantics in text. To address this issue, we use the pre-trained language model BERT for hate speech classification and try to fine-tune specific task by leveraging information from different transformer encoders."
],
[
"Here, we analyze the BERT transformer model on the hate speech detection task. BERT is a multi-layer bidirectional transformer encoder trained on the English Wikipedia and the Book Corpus containing 2,500M and 800M tokens, respectively, and has two models named BERTbase and BERTlarge. BERTbase contains an encoder with 12 layers (transformer blocks), 12 self-attention heads, and 110 million parameters whereas BERTlarge has 24 layers, 16 attention heads, and 340 million parameters. Extracted embeddings from BERTbase have 768 hidden dimensions BIBREF11. As the BERT model is pre-trained on general corpora, and for our hate speech detection task we are dealing with social media content, therefore as a crucial step, we have to analyze the contextual information extracted from BERT' s pre-trained layers and then fine-tune it using annotated datasets. By fine-tuning we update weights using a labelled dataset that is new to an already trained model. As an input and output, BERT takes a sequence of tokens in maximum length 512 and produces a representation of the sequence in a 768-dimensional vector. BERT inserts at most two segments to each input sequence, [CLS] and [SEP]. [CLS] embedding is the first token of the input sequence and contains the special classification embedding which we take the first token [CLS] in the final hidden layer as the representation of the whole sequence in hate speech classification task. The [SEP] separates segments and we will not use it in our classification task. To perform the hate speech detection task, we use BERTbase model to classify each tweet as Racism, Sexism, Neither or Hate, Offensive, Neither in our datasets. In order to do that, we focus on fine-tuning the pre-trained BERTbase parameters. By fine-tuning, we mean training a classifier with different layers of 768 dimensions on top of the pre-trained BERTbase transformer to minimize task-specific parameters."
],
[
"Different layers of a neural network can capture different levels of syntactic and semantic information. The lower layer of the BERT model may contain more general information whereas the higher layers contain task-specific information BIBREF11, and we can fine-tune them with different learning rates. Here, four different fine-tuning approaches are implemented that exploit pre-trained BERTbase transformer encoders for our classification task. More information about these transformer encoders' architectures are presented in BIBREF11. In the fine-tuning phase, the model is initialized with the pre-trained parameters and then are fine-tuned using the labelled datasets. Different fine-tuning approaches on the hate speech detection task are depicted in Figure FIGREF8, in which $X_{i}$ is the vector representation of token $i$ in a tweet sample, and are explained in more detail as follows:",
"1. BERT based fine-tuning: In the first approach, which is shown in Figure FIGREF8, very few changes are applied to the BERTbase. In this architecture, only the [CLS] token output provided by BERT is used. The [CLS] output, which is equivalent to the [CLS] token output of the 12th transformer encoder, a vector of size 768, is given as input to a fully connected network without hidden layer. The softmax activation function is applied to the hidden layer to classify.",
"2. Insert nonlinear layers: Here, the first architecture is upgraded and an architecture with a more robust classifier is provided in which instead of using a fully connected network without hidden layer, a fully connected network with two hidden layers in size 768 is used. The first two layers use the Leaky Relu activation function with negative slope = 0.01, but the final layer, as the first architecture, uses softmax activation function as shown in Figure FIGREF8.",
"3. Insert Bi-LSTM layer: Unlike previous architectures that only use [CLS] as the input for the classifier, in this architecture all outputs of the latest transformer encoder are used in such a way that they are given as inputs to a bidirectional recurrent neural network (Bi-LSTM) as shown in Figure FIGREF8. After processing the input, the network sends the final hidden state to a fully connected network that performs classification using the softmax activation function.",
"4. Insert CNN layer: In this architecture shown in Figure FIGREF8, the outputs of all transformer encoders are used instead of using the output of the latest transformer encoder. So that the output vectors of each transformer encoder are concatenated, and a matrix is produced. The convolutional operation is performed with a window of size (3, hidden size of BERT which is 768 in BERTbase model) and the maximum value is generated for each transformer encoder by applying max pooling on the convolution output. By concatenating these values, a vector is generated which is given as input to a fully connected network. By applying softmax on the input, the classification operation is performed."
],
[
"We first introduce datasets used in our study and then investigate the different fine-tuning strategies for hate speech detection task. We also include the details of our implementation and error analysis in the respective subsections."
],
[
"We evaluate our method on two widely-studied datasets provided by Waseem and Hovey BIBREF5 and Davidson et al. BIBREF9. Waseem and Hovy BIBREF5 collected $16k$ of tweets based on an initial ad-hoc approach that searched common slurs and terms related to religious, sexual, gender, and ethnic minorities. They annotated their dataset manually as racism, sexism, or neither. To extend this dataset, Waseem BIBREF23 also provided another dataset containing $6.9k$ of tweets annotated with both expert and crowdsourcing users as racism, sexism, neither, or both. Since both datasets are overlapped partially and they used the same strategy in definition of hateful content, we merged these two datasets following Waseem et al. BIBREF10 to make our imbalance data a bit larger. Davidson et al. BIBREF9 used the Twitter API to accumulate 84.4 million tweets from 33,458 twitter users containing particular terms from a pre-defined lexicon of hate speech words and phrases, called Hatebased.org. To annotate collected tweets as Hate, Offensive, or Neither, they randomly sampled $25k$ tweets and asked users of CrowdFlower crowdsourcing platform to label them. In detail, the distribution of different classes in both datasets will be provided in Subsection SECREF15."
],
[
"We find mentions of users, numbers, hashtags, URLs and common emoticons and replace them with the tokens <user>,<number>,<hashtag>,<url>,<emoticon>. We also find elongated words and convert them into short and standard format; for example, converting yeeeessss to yes. With hashtags that include some tokens without any with space between them, we replace them by their textual counterparts; for example, we convert hashtag “#notsexist\" to “not sexist\". All punctuation marks, unknown uni-codes and extra delimiting characters are removed, but we keep all stop words because our model trains the sequence of words in a text directly. We also convert all tweets to lower case."
],
[
"For the implementation of our neural network, we used pytorch-pretrained-bert library containing the pre-trained BERT model, text tokenizer, and pre-trained WordPiece. As the implementation environment, we use Google Colaboratory tool which is a free research tool with a Tesla K80 GPU and 12G RAM. Based on our experiments, we trained our classifier with a batch size of 32 for 3 epochs. The dropout probability is set to 0.1 for all layers. Adam optimizer is used with a learning rate of 2e-5. As an input, we tokenized each tweet with the BERT tokenizer. It contains invalid characters removal, punctuation splitting, and lowercasing the words. Based on the original BERT BIBREF11, we split words to subword units using WordPiece tokenization. As tweets are short texts, we set the maximum sequence length to 64 and in any shorter or longer length case it will be padded with zero values or truncated to the maximum length.",
"We consider 80% of each dataset as training data to update the weights in the fine-tuning phase, 10% as validation data to measure the out-of-sample performance of the model during training, and 10% as test data to measure the out-of-sample performance after training. To prevent overfitting, we use stratified sampling to select 0.8, 0.1, and 0.1 portions of tweets from each class (racism/sexism/neither or hate/offensive/neither) for train, validation, and test. Classes' distribution of train, validation, and test datasets are shown in Table TABREF16.",
"As it is understandable from Tables TABREF16(classdistributionwaseem) and TABREF16(classdistributiondavidson), we are dealing with imbalance datasets with various classes’ distribution. Since hate speech and offensive languages are real phenomena, we did not perform oversampling or undersampling techniques to adjust the classes’ distribution and tried to supply the datasets as realistic as possible. We evaluate the effect of different fine-tuning strategies on the performance of our model. Table TABREF17 summarized the obtained results for fine-tuning strategies along with the official baselines. We use Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10 as baselines and compare the results with our different fine-tuning strategies using pre-trained BERTbase model. The evaluation results are reported on the test dataset and on three different metrics: precision, recall, and weighted-average F1-score. We consider weighted-average F1-score as the most robust metric versus class imbalance, which gives insight into the performance of our proposed models. According to Table TABREF17, F1-scores of all BERT based fine-tuning strategies except BERT + nonlinear classifier on top of BERT are higher than the baselines. Using the pre-trained BERT model as initial embeddings and fine-tuning the model with a fully connected linear classifier (BERTbase) outperforms previous baselines yielding F1-score of 81% and 91% for datasets of Waseem and Davidson respectively. Inserting a CNN to pre-trained BERT model for fine-tuning on downstream task provides the best results as F1- score of 88% and 92% for datasets of Waseem and Davidson and it clearly exceeds the baselines. Intuitively, this makes sense that combining all pre-trained BERT layers with a CNN yields better results in which our model uses all the information included in different layers of pre-trained BERT during the fine-tuning phase. This information contains both syntactical and contextual features coming from lower layers to higher layers of BERT."
],
[
"Although we have very interesting results in term of recall, the precision of the model shows the portion of false detection we have. To understand better this phenomenon, in this section we perform a deep analysis on the error of the model. We investigate the test datasets and their confusion matrices resulted from the BERTbase + CNN model as the best fine-tuning approach; depicted in Figures FIGREF19 and FIGREF19. According to Figure FIGREF19 for Waseem-dataset, it is obvious that the model can separate sexism from racism content properly. Only two samples belonging to racism class are misclassified as sexism and none of the sexism samples are misclassified as racism. A large majority of the errors come from misclassifying hateful categories (racism and sexism) as hatless (neither) and vice versa. 0.9% and 18.5% of all racism samples are misclassified as sexism and neither respectively whereas it is 0% and 12.7% for sexism samples. Almost 12% of neither samples are misclassified as racism or sexism. As Figure FIGREF19 makes clear for Davidson-dataset, the majority of errors are related to hate class where the model misclassified hate content as offensive in 63% of the cases. However, 2.6% and 7.9% of offensive and neither samples are misclassified respectively.",
"To understand better the mislabeled items by our model, we did a manual inspection on a subset of the data and record some of them in Tables TABREF20 and TABREF21. Considering the words such as “daughters\", “women\", and “burka\" in tweets with IDs 1 and 2 in Table TABREF20, it can be understood that our BERT based classifier is confused with the contextual semantic between these words in the samples and misclassified them as sexism because they are mainly associated to femininity. In some cases containing implicit abuse (like subtle insults) such as tweets with IDs 5 and 7, our model cannot capture the hateful/offensive content and therefore misclassifies. It should be noticed that even for a human it is difficult to discriminate against this kind of implicit abuses.",
"By examining more samples and with respect to recently studies BIBREF2, BIBREF24, BIBREF19, it is clear that many errors are due to biases from data collection BIBREF19 and rules of annotation BIBREF24 and not the classifier itself. Since Waseem et al.BIBREF5 created a small ad-hoc set of keywords and Davidson et al.BIBREF9 used a large crowdsourced dictionary of keywords (Hatebase lexicon) to sample tweets for training, they included some biases in the collected data. Especially for Davidson-dataset, some tweets with specific language (written within the African American Vernacular English) and geographic restriction (United States of America) are oversampled such as tweets containing disparage words “nigga\", “faggot\", “coon\", or “queer\", result in high rates of misclassification. However, these misclassifications do not confirm the low performance of our classifier because annotators tended to annotate many samples containing disrespectful words as hate or offensive without any presumption about the social context of tweeters such as the speaker’s identity or dialect, whereas they were just offensive or even neither tweets. Tweets IDs 6, 8, and 10 are some samples containing offensive words and slurs which arenot hate or offensive in all cases and writers of them used this type of language in their daily communications. Given these pieces of evidence, by considering the content of tweets, we can see in tweets IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither and implicit hatred content exist. One explanation of this observation may be the pre-trained general knowledge that exists in our model. Since the pre-trained BERT model is trained on general corpora, it has learned general knowledge from normal textual data without any purposely hateful or offensive language. Therefore, despite the bias in the data, our model can differentiate hate and offensive samples accurately by leveraging knowledge-aware language understanding that it has and it can be the main reason for high misclassifications of hate samples as offensive (in reality they are more similar to offensive rather than hate by considering social context, geolocation, and dialect of tweeters)."
],
[
"Conflating hatred content with offensive or harmless language causes online automatic hate speech detection tools to flag user-generated content incorrectly. Not addressing this problem may bring about severe negative consequences for both platforms and users such as decreasement of platforms' reputation or users abandonment. Here, we propose a transfer learning approach advantaging the pre-trained language model BERT to enhance the performance of a hate speech detection system and to generalize it to new datasets. To that end, we introduce new fine-tuning strategies to examine the effect of different layers of BERT in hate speech detection task. The evaluation results indicate that our model outperforms previous works by profiting the syntactical and contextual information embedded in different transformer encoder layers of the BERT model using a CNN-based fine-tuning strategy. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets. It can be a valuable clue in using the pre-trained BERT model to alleviate bias in hate speech datasets in future studies, by investigating a mixture of contextual information embedded in the BERT’s layers and a set of features associated to the different type of biases in data.",
""
]
],
"section_name": [
"Introduction",
"Previous Works",
"Methodology",
"Methodology ::: Fine-Tuning Strategies",
"Experiments and Results",
"Experiments and Results ::: Dataset Description",
"Experiments and Results ::: Pre-Processing",
"Experiments and Results ::: Implementation and Results Analysis",
"Experiments and Results ::: Error Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"344c214235a6ad9fb7756a32507dc36f1acbc061",
"a7a8755bfc8e881b83a89e97bc1a0caed64996c5"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We evaluate our method on two widely-studied datasets provided by Waseem and Hovey BIBREF5 and Davidson et al. BIBREF9. Waseem and Hovy BIBREF5 collected $16k$ of tweets based on an initial ad-hoc approach that searched common slurs and terms related to religious, sexual, gender, and ethnic minorities. They annotated their dataset manually as racism, sexism, or neither. To extend this dataset, Waseem BIBREF23 also provided another dataset containing $6.9k$ of tweets annotated with both expert and crowdsourcing users as racism, sexism, neither, or both. Since both datasets are overlapped partially and they used the same strategy in definition of hateful content, we merged these two datasets following Waseem et al. BIBREF10 to make our imbalance data a bit larger. Davidson et al. BIBREF9 used the Twitter API to accumulate 84.4 million tweets from 33,458 twitter users containing particular terms from a pre-defined lexicon of hate speech words and phrases, called Hatebased.org. To annotate collected tweets as Hate, Offensive, or Neither, they randomly sampled $25k$ tweets and asked users of CrowdFlower crowdsourcing platform to label them. In detail, the distribution of different classes in both datasets will be provided in Subsection SECREF15."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our method on two widely-studied datasets provided by Waseem and Hovey BIBREF5 and Davidson et al. BIBREF9. Waseem and Hovy BIBREF5 collected $16k$ of tweets based on an initial ad-hoc approach that searched common slurs and terms related to religious, sexual, gender, and ethnic minorities. They annotated their dataset manually as racism, sexism, or neither. To extend this dataset, Waseem BIBREF23 also provided another dataset containing $6.9k$ of tweets annotated with both expert and crowdsourcing users as racism, sexism, neither, or both. Since both datasets are overlapped partially and they used the same strategy in definition of hateful content, we merged these two datasets following Waseem et al. BIBREF10 to make our imbalance data a bit larger. Davidson et al. BIBREF9 used the Twitter API to accumulate 84.4 million tweets from 33,458 twitter users containing particular terms from a pre-defined lexicon of hate speech words and phrases, called Hatebased.org. To annotate collected tweets as Hate, Offensive, or Neither, they randomly sampled $25k$ tweets and asked users of CrowdFlower crowdsourcing platform to label them. In detail, the distribution of different classes in both datasets will be provided in Subsection SECREF15."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"f7b6950b0704cc04f1832844a08881daa8c4185b"
],
"answer": [
{
"evidence": [
"By examining more samples and with respect to recently studies BIBREF2, BIBREF24, BIBREF19, it is clear that many errors are due to biases from data collection BIBREF19 and rules of annotation BIBREF24 and not the classifier itself. Since Waseem et al.BIBREF5 created a small ad-hoc set of keywords and Davidson et al.BIBREF9 used a large crowdsourced dictionary of keywords (Hatebase lexicon) to sample tweets for training, they included some biases in the collected data. Especially for Davidson-dataset, some tweets with specific language (written within the African American Vernacular English) and geographic restriction (United States of America) are oversampled such as tweets containing disparage words “nigga\", “faggot\", “coon\", or “queer\", result in high rates of misclassification. However, these misclassifications do not confirm the low performance of our classifier because annotators tended to annotate many samples containing disrespectful words as hate or offensive without any presumption about the social context of tweeters such as the speaker’s identity or dialect, whereas they were just offensive or even neither tweets. Tweets IDs 6, 8, and 10 are some samples containing offensive words and slurs which arenot hate or offensive in all cases and writers of them used this type of language in their daily communications. Given these pieces of evidence, by considering the content of tweets, we can see in tweets IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither and implicit hatred content exist. One explanation of this observation may be the pre-trained general knowledge that exists in our model. Since the pre-trained BERT model is trained on general corpora, it has learned general knowledge from normal textual data without any purposely hateful or offensive language. Therefore, despite the bias in the data, our model can differentiate hate and offensive samples accurately by leveraging knowledge-aware language understanding that it has and it can be the main reason for high misclassifications of hate samples as offensive (in reality they are more similar to offensive rather than hate by considering social context, geolocation, and dialect of tweeters)."
],
"extractive_spans": [],
"free_form_answer": "The authors showed few tweets where neither and implicit hatred content exist but the model was able to discriminate",
"highlighted_evidence": [
"Tweets IDs 6, 8, and 10 are some samples containing offensive words and slurs which arenot hate or offensive in all cases and writers of them used this type of language in their daily communications. Given these pieces of evidence, by considering the content of tweets, we can see in tweets IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither and implicit hatred content exist."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"3778ad2b8449a0cf65be9140537230e4bd9bc9f8",
"dffe3547623e4c303f473e18fbb4dccdac1fa3a5"
],
"answer": [
{
"evidence": [
"We evaluate our method on two widely-studied datasets provided by Waseem and Hovey BIBREF5 and Davidson et al. BIBREF9. Waseem and Hovy BIBREF5 collected $16k$ of tweets based on an initial ad-hoc approach that searched common slurs and terms related to religious, sexual, gender, and ethnic minorities. They annotated their dataset manually as racism, sexism, or neither. To extend this dataset, Waseem BIBREF23 also provided another dataset containing $6.9k$ of tweets annotated with both expert and crowdsourcing users as racism, sexism, neither, or both. Since both datasets are overlapped partially and they used the same strategy in definition of hateful content, we merged these two datasets following Waseem et al. BIBREF10 to make our imbalance data a bit larger. Davidson et al. BIBREF9 used the Twitter API to accumulate 84.4 million tweets from 33,458 twitter users containing particular terms from a pre-defined lexicon of hate speech words and phrases, called Hatebased.org. To annotate collected tweets as Hate, Offensive, or Neither, they randomly sampled $25k$ tweets and asked users of CrowdFlower crowdsourcing platform to label them. In detail, the distribution of different classes in both datasets will be provided in Subsection SECREF15."
],
"extractive_spans": [
"Waseem-dataset",
"Davidson-dataset,"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our method on two widely-studied datasets provided by Waseem and Hovey BIBREF5 and Davidson et al. BIBREF9."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our method on two widely-studied datasets provided by Waseem and Hovey BIBREF5 and Davidson et al. BIBREF9. Waseem and Hovy BIBREF5 collected $16k$ of tweets based on an initial ad-hoc approach that searched common slurs and terms related to religious, sexual, gender, and ethnic minorities. They annotated their dataset manually as racism, sexism, or neither. To extend this dataset, Waseem BIBREF23 also provided another dataset containing $6.9k$ of tweets annotated with both expert and crowdsourcing users as racism, sexism, neither, or both. Since both datasets are overlapped partially and they used the same strategy in definition of hateful content, we merged these two datasets following Waseem et al. BIBREF10 to make our imbalance data a bit larger. Davidson et al. BIBREF9 used the Twitter API to accumulate 84.4 million tweets from 33,458 twitter users containing particular terms from a pre-defined lexicon of hate speech words and phrases, called Hatebased.org. To annotate collected tweets as Hate, Offensive, or Neither, they randomly sampled $25k$ tweets and asked users of CrowdFlower crowdsourcing platform to label them. In detail, the distribution of different classes in both datasets will be provided in Subsection SECREF15."
],
"extractive_spans": [
"Waseem and Hovey BIBREF5",
"Davidson et al. BIBREF9"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our method on two widely-studied datasets provided by Waseem and Hovey BIBREF5 and Davidson et al. BIBREF9. Waseem and Hovy BIBREF5 collected $16k$ of tweets based on an initial ad-hoc approach that searched common slurs and terms related to religious, sexual, gender, and ethnic minorities. They annotated their dataset manually as racism, sexism, or neither. To extend this dataset, Waseem BIBREF23 also provided another dataset containing $6.9k$ of tweets annotated with both expert and crowdsourcing users as racism, sexism, neither, or both. Since both datasets are overlapped partially and they used the same strategy in definition of hateful content, we merged these two datasets following Waseem et al. BIBREF10 to make our imbalance data a bit larger. Davidson et al. BIBREF9 used the Twitter API to accumulate 84.4 million tweets from 33,458 twitter users containing particular terms from a pre-defined lexicon of hate speech words and phrases, called Hatebased.org. To annotate collected tweets as Hate, Offensive, or Neither, they randomly sampled $25k$ tweets and asked users of CrowdFlower crowdsourcing platform to label them. In detail, the distribution of different classes in both datasets will be provided in Subsection SECREF15."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"59d2da63dc6eeae50f160a0809f3594c6374412a",
"5fe8e42d6bd31ec126812aa39f21b246a1807312"
],
"answer": [
{
"evidence": [
"As it is understandable from Tables TABREF16(classdistributionwaseem) and TABREF16(classdistributiondavidson), we are dealing with imbalance datasets with various classes’ distribution. Since hate speech and offensive languages are real phenomena, we did not perform oversampling or undersampling techniques to adjust the classes’ distribution and tried to supply the datasets as realistic as possible. We evaluate the effect of different fine-tuning strategies on the performance of our model. Table TABREF17 summarized the obtained results for fine-tuning strategies along with the official baselines. We use Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10 as baselines and compare the results with our different fine-tuning strategies using pre-trained BERTbase model. The evaluation results are reported on the test dataset and on three different metrics: precision, recall, and weighted-average F1-score. We consider weighted-average F1-score as the most robust metric versus class imbalance, which gives insight into the performance of our proposed models. According to Table TABREF17, F1-scores of all BERT based fine-tuning strategies except BERT + nonlinear classifier on top of BERT are higher than the baselines. Using the pre-trained BERT model as initial embeddings and fine-tuning the model with a fully connected linear classifier (BERTbase) outperforms previous baselines yielding F1-score of 81% and 91% for datasets of Waseem and Davidson respectively. Inserting a CNN to pre-trained BERT model for fine-tuning on downstream task provides the best results as F1- score of 88% and 92% for datasets of Waseem and Davidson and it clearly exceeds the baselines. Intuitively, this makes sense that combining all pre-trained BERT layers with a CNN yields better results in which our model uses all the information included in different layers of pre-trained BERT during the fine-tuning phase. This information contains both syntactical and contextual features coming from lower layers to higher layers of BERT."
],
"extractive_spans": [
"Waseem and Hovy BIBREF5",
"Davidson et al. BIBREF9",
"Waseem et al. BIBREF10"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10 as baselines and compare the results with our different fine-tuning strategies using pre-trained BERTbase model. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As it is understandable from Tables TABREF16(classdistributionwaseem) and TABREF16(classdistributiondavidson), we are dealing with imbalance datasets with various classes’ distribution. Since hate speech and offensive languages are real phenomena, we did not perform oversampling or undersampling techniques to adjust the classes’ distribution and tried to supply the datasets as realistic as possible. We evaluate the effect of different fine-tuning strategies on the performance of our model. Table TABREF17 summarized the obtained results for fine-tuning strategies along with the official baselines. We use Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10 as baselines and compare the results with our different fine-tuning strategies using pre-trained BERTbase model. The evaluation results are reported on the test dataset and on three different metrics: precision, recall, and weighted-average F1-score. We consider weighted-average F1-score as the most robust metric versus class imbalance, which gives insight into the performance of our proposed models. According to Table TABREF17, F1-scores of all BERT based fine-tuning strategies except BERT + nonlinear classifier on top of BERT are higher than the baselines. Using the pre-trained BERT model as initial embeddings and fine-tuning the model with a fully connected linear classifier (BERTbase) outperforms previous baselines yielding F1-score of 81% and 91% for datasets of Waseem and Davidson respectively. Inserting a CNN to pre-trained BERT model for fine-tuning on downstream task provides the best results as F1- score of 88% and 92% for datasets of Waseem and Davidson and it clearly exceeds the baselines. Intuitively, this makes sense that combining all pre-trained BERT layers with a CNN yields better results in which our model uses all the information included in different layers of pre-trained BERT during the fine-tuning phase. This information contains both syntactical and contextual features coming from lower layers to higher layers of BERT."
],
"extractive_spans": [
"Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10 as baselines and compare the results with our different fine-tuning strategies using pre-trained BERTbase model. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"34dab457cb9578403c2817fb0608f5328184afd4",
"d1e08b1afd02f76ebb8e9c1fcbebac629d01160f"
],
"answer": [
{
"evidence": [
"1. BERT based fine-tuning: In the first approach, which is shown in Figure FIGREF8, very few changes are applied to the BERTbase. In this architecture, only the [CLS] token output provided by BERT is used. The [CLS] output, which is equivalent to the [CLS] token output of the 12th transformer encoder, a vector of size 768, is given as input to a fully connected network without hidden layer. The softmax activation function is applied to the hidden layer to classify.",
"2. Insert nonlinear layers: Here, the first architecture is upgraded and an architecture with a more robust classifier is provided in which instead of using a fully connected network without hidden layer, a fully connected network with two hidden layers in size 768 is used. The first two layers use the Leaky Relu activation function with negative slope = 0.01, but the final layer, as the first architecture, uses softmax activation function as shown in Figure FIGREF8.",
"3. Insert Bi-LSTM layer: Unlike previous architectures that only use [CLS] as the input for the classifier, in this architecture all outputs of the latest transformer encoder are used in such a way that they are given as inputs to a bidirectional recurrent neural network (Bi-LSTM) as shown in Figure FIGREF8. After processing the input, the network sends the final hidden state to a fully connected network that performs classification using the softmax activation function.",
"4. Insert CNN layer: In this architecture shown in Figure FIGREF8, the outputs of all transformer encoders are used instead of using the output of the latest transformer encoder. So that the output vectors of each transformer encoder are concatenated, and a matrix is produced. The convolutional operation is performed with a window of size (3, hidden size of BERT which is 768 in BERTbase model) and the maximum value is generated for each transformer encoder by applying max pooling on the convolution output. By concatenating these values, a vector is generated which is given as input to a fully connected network. By applying softmax on the input, the classification operation is performed."
],
"extractive_spans": [
"BERT based fine-tuning",
"Insert nonlinear layers",
"Insert Bi-LSTM layer",
"Insert CNN layer"
],
"free_form_answer": "",
"highlighted_evidence": [
"1. BERT based fine-tuning: In the first approach, which is shown in Figure FIGREF8, very few changes are applied to the BERTbase. In this architecture, only the [CLS] token output provided by BERT is used. The [CLS] output, which is equivalent to the [CLS] token output of the 12th transformer encoder, a vector of size 768, is given as input to a fully connected network without hidden layer. The softmax activation function is applied to the hidden layer to classify.\n\n2. Insert nonlinear layers: Here, the first architecture is upgraded and an architecture with a more robust classifier is provided in which instead of using a fully connected network without hidden layer, a fully connected network with two hidden layers in size 768 is used. The first two layers use the Leaky Relu activation function with negative slope = 0.01, but the final layer, as the first architecture, uses softmax activation function as shown in Figure FIGREF8.\n\n3. Insert Bi-LSTM layer: Unlike previous architectures that only use [CLS] as the input for the classifier, in this architecture all outputs of the latest transformer encoder are used in such a way that they are given as inputs to a bidirectional recurrent neural network (Bi-LSTM) as shown in Figure FIGREF8. After processing the input, the network sends the final hidden state to a fully connected network that performs classification using the softmax activation function.\n\n4. Insert CNN layer: In this architecture shown in Figure FIGREF8, the outputs of all transformer encoders are used instead of using the output of the latest transformer encoder. So that the output vectors of each transformer encoder are concatenated, and a matrix is produced. The convolutional operation is performed with a window of size (3, hidden size of BERT which is 768 in BERTbase model) and the maximum value is generated for each transformer encoder by applying max pooling on the convolution output. By concatenating these values, a vector is generated which is given as input to a fully connected network. By applying softmax on the input, the classification operation is performed."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Different layers of a neural network can capture different levels of syntactic and semantic information. The lower layer of the BERT model may contain more general information whereas the higher layers contain task-specific information BIBREF11, and we can fine-tune them with different learning rates. Here, four different fine-tuning approaches are implemented that exploit pre-trained BERTbase transformer encoders for our classification task. More information about these transformer encoders' architectures are presented in BIBREF11. In the fine-tuning phase, the model is initialized with the pre-trained parameters and then are fine-tuned using the labelled datasets. Different fine-tuning approaches on the hate speech detection task are depicted in Figure FIGREF8, in which $X_{i}$ is the vector representation of token $i$ in a tweet sample, and are explained in more detail as follows:",
"1. BERT based fine-tuning: In the first approach, which is shown in Figure FIGREF8, very few changes are applied to the BERTbase. In this architecture, only the [CLS] token output provided by BERT is used. The [CLS] output, which is equivalent to the [CLS] token output of the 12th transformer encoder, a vector of size 768, is given as input to a fully connected network without hidden layer. The softmax activation function is applied to the hidden layer to classify.",
"2. Insert nonlinear layers: Here, the first architecture is upgraded and an architecture with a more robust classifier is provided in which instead of using a fully connected network without hidden layer, a fully connected network with two hidden layers in size 768 is used. The first two layers use the Leaky Relu activation function with negative slope = 0.01, but the final layer, as the first architecture, uses softmax activation function as shown in Figure FIGREF8.",
"3. Insert Bi-LSTM layer: Unlike previous architectures that only use [CLS] as the input for the classifier, in this architecture all outputs of the latest transformer encoder are used in such a way that they are given as inputs to a bidirectional recurrent neural network (Bi-LSTM) as shown in Figure FIGREF8. After processing the input, the network sends the final hidden state to a fully connected network that performs classification using the softmax activation function.",
"4. Insert CNN layer: In this architecture shown in Figure FIGREF8, the outputs of all transformer encoders are used instead of using the output of the latest transformer encoder. So that the output vectors of each transformer encoder are concatenated, and a matrix is produced. The convolutional operation is performed with a window of size (3, hidden size of BERT which is 768 in BERTbase model) and the maximum value is generated for each transformer encoder by applying max pooling on the convolution output. By concatenating these values, a vector is generated which is given as input to a fully connected network. By applying softmax on the input, the classification operation is performed."
],
"extractive_spans": [
"BERT based fine-tuning",
"Insert nonlinear layers",
"Insert Bi-LSTM layer",
"Insert CNN layer"
],
"free_form_answer": "",
"highlighted_evidence": [
"Here, four different fine-tuning approaches are implemented that exploit pre-trained BERTbase transformer encoders for our classification task.",
"1. BERT based fine-tuning: In the first approach, which is shown in Figure FIGREF8, very few changes are applied to the BERTbase. ",
"2. Insert nonlinear layers: Here, the first architecture is upgraded and an architecture with a more robust classifier is provided in which instead of using a fully connected network without hidden layer, a fully connected network with two hidden layers in size 768 is used. ",
"3. Insert Bi-LSTM layer: Unlike previous architectures that only use [CLS] as the input for the classifier, in this architecture all outputs of the latest transformer encoder are used in such a way that they are given as inputs to a bidirectional recurrent neural network (Bi-LSTM) as shown in Figure FIGREF8.",
"4. Insert CNN layer: In this architecture shown in Figure FIGREF8, the outputs of all transformer encoders are used instead of using the output of the latest transformer encoder."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"2c7b7b7e768e4953454c9809fbcc7bed71d0eb3a",
"8bdcbe6cd9d1ad1dfbf5502d2911312655a3f53f"
],
"answer": [
{
"evidence": [
"As one of the first attempts in neural network models, Djuric et al. BIBREF16 proposed a two-step method including a continuous bag of words model to extract paragraph2vec embeddings and a binary classifier trained along with the embeddings to distinguish between hate speech and clean content. Badjatiya et al. BIBREF0 investigated three deep learning architectures, FastText, CNN, and LSTM, in which they initialized the word embeddings with either random or GloVe embeddings. Gambäck et al. BIBREF6 proposed a hate speech classifier based on CNN model trained on different feature embeddings such as word embeddings and character $n$-grams. Zhang et al. BIBREF7 used a CNN+GRU (Gated Recurrent Unit network) neural network model initialized with pre-trained word2vec embeddings to capture both word/character combinations (e. g., $n$-grams, phrases) and word/character dependencies (order information). Waseem et al. BIBREF10 brought a new insight to hate speech and abusive language detection tasks by proposing a multi-task learning framework to deal with datasets across different annotation schemes, labels, or geographic and cultural influences from data sampling. Founta et al. BIBREF17 built a unified classification model that can efficiently handle different types of abusive language such as cyberbullying, hate, sarcasm, etc. using raw text and domain-specific metadata from Twitter. Furthermore, researchers have recently focused on the bias derived from the hate speech training datasets BIBREF18, BIBREF2, BIBREF19. Davidson et al. BIBREF2 showed that there were systematic and substantial racial biases in five benchmark Twitter datasets annotated for offensive language detection. Wiegand et al. BIBREF19 also found that classifiers trained on datasets containing more implicit abuse (tweets with some abusive words) are more affected by biases rather than once trained on datasets with a high proportion of explicit abuse samples (tweets containing sarcasm, jokes, etc.).",
"By examining more samples and with respect to recently studies BIBREF2, BIBREF24, BIBREF19, it is clear that many errors are due to biases from data collection BIBREF19 and rules of annotation BIBREF24 and not the classifier itself. Since Waseem et al.BIBREF5 created a small ad-hoc set of keywords and Davidson et al.BIBREF9 used a large crowdsourced dictionary of keywords (Hatebase lexicon) to sample tweets for training, they included some biases in the collected data. Especially for Davidson-dataset, some tweets with specific language (written within the African American Vernacular English) and geographic restriction (United States of America) are oversampled such as tweets containing disparage words “nigga\", “faggot\", “coon\", or “queer\", result in high rates of misclassification. However, these misclassifications do not confirm the low performance of our classifier because annotators tended to annotate many samples containing disrespectful words as hate or offensive without any presumption about the social context of tweeters such as the speaker’s identity or dialect, whereas they were just offensive or even neither tweets. Tweets IDs 6, 8, and 10 are some samples containing offensive words and slurs which arenot hate or offensive in all cases and writers of them used this type of language in their daily communications. Given these pieces of evidence, by considering the content of tweets, we can see in tweets IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither and implicit hatred content exist. One explanation of this observation may be the pre-trained general knowledge that exists in our model. Since the pre-trained BERT model is trained on general corpora, it has learned general knowledge from normal textual data without any purposely hateful or offensive language. Therefore, despite the bias in the data, our model can differentiate hate and offensive samples accurately by leveraging knowledge-aware language understanding that it has and it can be the main reason for high misclassifications of hate samples as offensive (in reality they are more similar to offensive rather than hate by considering social context, geolocation, and dialect of tweeters)."
],
"extractive_spans": [
"systematic and substantial racial biases",
"biases from data collection",
"rules of annotation"
],
"free_form_answer": "",
"highlighted_evidence": [
"Davidson et al. BIBREF2 showed that there were systematic and substantial racial biases in five benchmark Twitter datasets annotated for offensive language detection.",
"By examining more samples and with respect to recently studies BIBREF2, BIBREF24, BIBREF19, it is clear that many errors are due to biases from data collection BIBREF19 and rules of annotation BIBREF24 and not the classifier itself. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As one of the first attempts in neural network models, Djuric et al. BIBREF16 proposed a two-step method including a continuous bag of words model to extract paragraph2vec embeddings and a binary classifier trained along with the embeddings to distinguish between hate speech and clean content. Badjatiya et al. BIBREF0 investigated three deep learning architectures, FastText, CNN, and LSTM, in which they initialized the word embeddings with either random or GloVe embeddings. Gambäck et al. BIBREF6 proposed a hate speech classifier based on CNN model trained on different feature embeddings such as word embeddings and character $n$-grams. Zhang et al. BIBREF7 used a CNN+GRU (Gated Recurrent Unit network) neural network model initialized with pre-trained word2vec embeddings to capture both word/character combinations (e. g., $n$-grams, phrases) and word/character dependencies (order information). Waseem et al. BIBREF10 brought a new insight to hate speech and abusive language detection tasks by proposing a multi-task learning framework to deal with datasets across different annotation schemes, labels, or geographic and cultural influences from data sampling. Founta et al. BIBREF17 built a unified classification model that can efficiently handle different types of abusive language such as cyberbullying, hate, sarcasm, etc. using raw text and domain-specific metadata from Twitter. Furthermore, researchers have recently focused on the bias derived from the hate speech training datasets BIBREF18, BIBREF2, BIBREF19. Davidson et al. BIBREF2 showed that there were systematic and substantial racial biases in five benchmark Twitter datasets annotated for offensive language detection. Wiegand et al. BIBREF19 also found that classifiers trained on datasets containing more implicit abuse (tweets with some abusive words) are more affected by biases rather than once trained on datasets with a high proportion of explicit abuse samples (tweets containing sarcasm, jokes, etc.).",
"By examining more samples and with respect to recently studies BIBREF2, BIBREF24, BIBREF19, it is clear that many errors are due to biases from data collection BIBREF19 and rules of annotation BIBREF24 and not the classifier itself. Since Waseem et al.BIBREF5 created a small ad-hoc set of keywords and Davidson et al.BIBREF9 used a large crowdsourced dictionary of keywords (Hatebase lexicon) to sample tweets for training, they included some biases in the collected data. Especially for Davidson-dataset, some tweets with specific language (written within the African American Vernacular English) and geographic restriction (United States of America) are oversampled such as tweets containing disparage words “nigga\", “faggot\", “coon\", or “queer\", result in high rates of misclassification. However, these misclassifications do not confirm the low performance of our classifier because annotators tended to annotate many samples containing disrespectful words as hate or offensive without any presumption about the social context of tweeters such as the speaker’s identity or dialect, whereas they were just offensive or even neither tweets. Tweets IDs 6, 8, and 10 are some samples containing offensive words and slurs which arenot hate or offensive in all cases and writers of them used this type of language in their daily communications. Given these pieces of evidence, by considering the content of tweets, we can see in tweets IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither and implicit hatred content exist. One explanation of this observation may be the pre-trained general knowledge that exists in our model. Since the pre-trained BERT model is trained on general corpora, it has learned general knowledge from normal textual data without any purposely hateful or offensive language. Therefore, despite the bias in the data, our model can differentiate hate and offensive samples accurately by leveraging knowledge-aware language understanding that it has and it can be the main reason for high misclassifications of hate samples as offensive (in reality they are more similar to offensive rather than hate by considering social context, geolocation, and dialect of tweeters)."
],
"extractive_spans": [],
"free_form_answer": "sampling tweets from specific keywords create systematic and substancial racial biases in datasets",
"highlighted_evidence": [
"Davidson et al. BIBREF2 showed that there were systematic and substantial racial biases in five benchmark Twitter datasets annotated for offensive language detection. Wiegand et al. BIBREF19 also found that classifiers trained on datasets containing more implicit abuse (tweets with some abusive words) are more affected by biases rather than once trained on datasets with a high proportion of explicit abuse samples (tweets containing sarcasm, jokes, etc.).\n\n",
"By examining more samples and with respect to recently studies BIBREF2, BIBREF24, BIBREF19, it is clear that many errors are due to biases from data collection BIBREF19 and rules of annotation BIBREF24 and not the classifier itself. Since Waseem et al.BIBREF5 created a small ad-hoc set of keywords and Davidson et al.BIBREF9 used a large crowdsourced dictionary of keywords (Hatebase lexicon) to sample tweets for training, they included some biases in the collected data. Especially for Davidson-dataset, some tweets with specific language (written within the African American Vernacular English) and geographic restriction (United States of America) are oversampled such as tweets containing disparage words “nigga\", “faggot\", “coon\", or “queer\", result in high rates of misclassification. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"f83e90e59d7a1f7a5079f48d1c4c3652136759d6"
],
"answer": [
{
"evidence": [
"By examining more samples and with respect to recently studies BIBREF2, BIBREF24, BIBREF19, it is clear that many errors are due to biases from data collection BIBREF19 and rules of annotation BIBREF24 and not the classifier itself. Since Waseem et al.BIBREF5 created a small ad-hoc set of keywords and Davidson et al.BIBREF9 used a large crowdsourced dictionary of keywords (Hatebase lexicon) to sample tweets for training, they included some biases in the collected data. Especially for Davidson-dataset, some tweets with specific language (written within the African American Vernacular English) and geographic restriction (United States of America) are oversampled such as tweets containing disparage words “nigga\", “faggot\", “coon\", or “queer\", result in high rates of misclassification. However, these misclassifications do not confirm the low performance of our classifier because annotators tended to annotate many samples containing disrespectful words as hate or offensive without any presumption about the social context of tweeters such as the speaker’s identity or dialect, whereas they were just offensive or even neither tweets. Tweets IDs 6, 8, and 10 are some samples containing offensive words and slurs which arenot hate or offensive in all cases and writers of them used this type of language in their daily communications. Given these pieces of evidence, by considering the content of tweets, we can see in tweets IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither and implicit hatred content exist. One explanation of this observation may be the pre-trained general knowledge that exists in our model. Since the pre-trained BERT model is trained on general corpora, it has learned general knowledge from normal textual data without any purposely hateful or offensive language. Therefore, despite the bias in the data, our model can differentiate hate and offensive samples accurately by leveraging knowledge-aware language understanding that it has and it can be the main reason for high misclassifications of hate samples as offensive (in reality they are more similar to offensive rather than hate by considering social context, geolocation, and dialect of tweeters)."
],
"extractive_spans": [],
"free_form_answer": "Data annotation biases where tweet containing disrespectful words are annotated as hate or offensive without any presumption about the social context of tweeters",
"highlighted_evidence": [
"Especially for Davidson-dataset, some tweets with specific language (written within the African American Vernacular English) and geographic restriction (United States of America) are oversampled such as tweets containing disparage words “nigga\", “faggot\", “coon\", or “queer\", result in high rates of misclassification. However, these misclassifications do not confirm the low performance of our classifier because annotators tended to annotate many samples containing disrespectful words as hate or offensive without any presumption about the social context of tweeters such as the speaker’s identity or dialect, whereas they were just offensive or even neither tweets. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"74e72dd8b2c910147f0e62814471f5f72c55a819",
"b22ef8570a53322bacc2f06f400dbf1e1ecc742d"
],
"answer": [
{
"evidence": [
"As it is understandable from Tables TABREF16(classdistributionwaseem) and TABREF16(classdistributiondavidson), we are dealing with imbalance datasets with various classes’ distribution. Since hate speech and offensive languages are real phenomena, we did not perform oversampling or undersampling techniques to adjust the classes’ distribution and tried to supply the datasets as realistic as possible. We evaluate the effect of different fine-tuning strategies on the performance of our model. Table TABREF17 summarized the obtained results for fine-tuning strategies along with the official baselines. We use Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10 as baselines and compare the results with our different fine-tuning strategies using pre-trained BERTbase model. The evaluation results are reported on the test dataset and on three different metrics: precision, recall, and weighted-average F1-score. We consider weighted-average F1-score as the most robust metric versus class imbalance, which gives insight into the performance of our proposed models. According to Table TABREF17, F1-scores of all BERT based fine-tuning strategies except BERT + nonlinear classifier on top of BERT are higher than the baselines. Using the pre-trained BERT model as initial embeddings and fine-tuning the model with a fully connected linear classifier (BERTbase) outperforms previous baselines yielding F1-score of 81% and 91% for datasets of Waseem and Davidson respectively. Inserting a CNN to pre-trained BERT model for fine-tuning on downstream task provides the best results as F1- score of 88% and 92% for datasets of Waseem and Davidson and it clearly exceeds the baselines. Intuitively, this makes sense that combining all pre-trained BERT layers with a CNN yields better results in which our model uses all the information included in different layers of pre-trained BERT during the fine-tuning phase. This information contains both syntactical and contextual features coming from lower layers to higher layers of BERT."
],
"extractive_spans": [
"Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10 as baselines and compare the results with our different fine-tuning strategies using pre-trained BERTbase model. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As it is understandable from Tables TABREF16(classdistributionwaseem) and TABREF16(classdistributiondavidson), we are dealing with imbalance datasets with various classes’ distribution. Since hate speech and offensive languages are real phenomena, we did not perform oversampling or undersampling techniques to adjust the classes’ distribution and tried to supply the datasets as realistic as possible. We evaluate the effect of different fine-tuning strategies on the performance of our model. Table TABREF17 summarized the obtained results for fine-tuning strategies along with the official baselines. We use Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10 as baselines and compare the results with our different fine-tuning strategies using pre-trained BERTbase model. The evaluation results are reported on the test dataset and on three different metrics: precision, recall, and weighted-average F1-score. We consider weighted-average F1-score as the most robust metric versus class imbalance, which gives insight into the performance of our proposed models. According to Table TABREF17, F1-scores of all BERT based fine-tuning strategies except BERT + nonlinear classifier on top of BERT are higher than the baselines. Using the pre-trained BERT model as initial embeddings and fine-tuning the model with a fully connected linear classifier (BERTbase) outperforms previous baselines yielding F1-score of 81% and 91% for datasets of Waseem and Davidson respectively. Inserting a CNN to pre-trained BERT model for fine-tuning on downstream task provides the best results as F1- score of 88% and 92% for datasets of Waseem and Davidson and it clearly exceeds the baselines. Intuitively, this makes sense that combining all pre-trained BERT layers with a CNN yields better results in which our model uses all the information included in different layers of pre-trained BERT during the fine-tuning phase. This information contains both syntactical and contextual features coming from lower layers to higher layers of BERT."
],
"extractive_spans": [
"Waseem and Hovy BIBREF5",
"Davidson et al. BIBREF9",
"Waseem et al. BIBREF10"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10 as baselines and compare the results with our different fine-tuning strategies using pre-trained BERTbase model. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"",
""
],
"question": [
"Do they report results only on English data?",
"What evidence do the authors present that the model can capture some biases in data annotation and collection?",
"Which publicly available datasets are used?",
"What baseline is used?",
"What new fine-tuning methods are presented?",
"What are the existing biases?",
"What biases does their model capture?",
"What existing approaches do they compare to?"
],
"question_id": [
"8748e8f64af57560d124c7b518b853bf2711c13e",
"893ec40b678a72760b6802f6abf73b8f487ae639",
"c81f215d457bdb913a5bade2b4283f19c4ee826c",
"e101e38efaa4b931f7dd75757caacdc945bb32b4",
"afb77b11da41cd0edcaa496d3f634d18e48d7168",
"41b2355766a4260f41b477419d44c3fd37f3547d",
"96a4091f681872e6d98d0efee777d9e820cb8dae",
"81a35b9572c9d574a30cc2164f47750716157fc8"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter",
"twitter",
"social media",
"social media"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"",
""
]
} | {
"caption": [
"Fig. 1: Fine-tuning strategies",
"Table 1: Dataset statistics of the both Waseem-dataset (a) and Davidson-dataset (b). Splits are produced using stratified sampling to select 0.8, 0.1, and 0.1 portions of tweets from each class (racism/sexism/neither or hate/offensive/neither) for train, validation, and test samples, respectively.",
"Table 2: Results on the trial data using pre-trained BERT model with different finetuning strategies and comparison with results in the literature.",
"Fig. 2: Waseem-datase’s confusion matrix Fig. 3: Davidson-dataset’s confusion matrix"
],
"file": [
"6-Figure1-1.png",
"8-Table1-1.png",
"8-Table2-1.png",
"9-Figure2-1.png"
]
} | [
"What evidence do the authors present that the model can capture some biases in data annotation and collection?",
"What are the existing biases?",
"What biases does their model capture?"
] | [
[
"1910.12574-Experiments and Results ::: Error Analysis-2"
],
[
"1910.12574-Previous Works-2",
"1910.12574-Experiments and Results ::: Error Analysis-2"
],
[
"1910.12574-Experiments and Results ::: Error Analysis-2"
]
] | [
"The authors showed few tweets where neither and implicit hatred content exist but the model was able to discriminate",
"sampling tweets from specific keywords create systematic and substancial racial biases in datasets",
"Data annotation biases where tweet containing disrespectful words are annotated as hate or offensive without any presumption about the social context of tweeters"
] | 158 |
1702.03342 | Learning Concept Embeddings for Efficient Bag-of-Concepts Densification | Explicit concept space models have proven effective for text representation in many natural language and text mining applications. The idea is to embed textual structures into a semantic space of concepts which captures the main topics of these structures. This so-called bag-of-concepts representation suffers from data sparsity, causing low similarity scores between similar texts due to low concept overlap. In this paper we propose two neural embedding models to learn continuous concept vectors. Once these are learned, we propose an efficient vector aggregation method to generate fully dense bag-of-concepts representations. Empirical results on a benchmark dataset for measuring entity semantic relatedness show superior performance over other concept embedding models. In addition, by utilizing our efficient aggregation method, we demonstrate the effectiveness of the densified vector representation over the typical sparse representations for dataless classification, where we achieve at least the same or better accuracy with far fewer dimensions. | {
"paragraphs": [
[
"Vector-based semantic mapping models are used to represent textual structures (words, phrases, and documents) as high-dimensional meaning vectors. Typically, these models utilize textual corpora and/or Knowledge Bases (KBs) to acquire world knowledge, which is then used to generate a vector representation for the given text in the semantic space. The goal is thus to accurately place semantically similar structures close to each other in that semantic space. On the other hand, dissimilar structures should be far apart.",
"Explicit concept space models are motivated by the idea that high level cognitive tasks such learning and reasoning are supported by the knowledge we acquire from concepts BIBREF0 . Therefore, such models utilize concept vectors (a.k.a bag-of-concepts (BOC)) as the underlying semantic representation of a given text through a process called conceptualization, which is mapping the text into relevant concepts capturing its main topics. The concept space typically include concepts obtained from KBs such as Wikipedia, Probase BIBREF1 , and others. Once the concept vectors are generated, similarity between two concept vectors can be computed using a suitable similarity/distance measure such as cosine.",
"The BOC representation has proven efficacy for semantic analysis of textual data especially short texts where contextual information is missing or insufficient. For example, measuring semantic similarity/relatedness BIBREF2 , BIBREF3 , BIBREF4 , dataless classification BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , short text clustering BIBREF0 , search and relevancy ranking BIBREF9 , event detection and coreference resolution BIBREF10 .",
"Similar to the traditional bag-of-words representation, the BOC vector is a high dimensional sparse vector whose dimensionality is the same as the number of concepts in the employed KB (typically millions). Consequently, it suffers from data sparsity causing low similarity scores between similar texts due to low concept overlap. Formally, given a text snippet INLINEFORM0 of INLINEFORM1 terms where INLINEFORM2 , and a concept space INLINEFORM3 of size INLINEFORM4 . The BOC vector INLINEFORM5 = INLINEFORM6 of INLINEFORM7 is a vector of weights of each concept where each INLINEFORM8 of concept INLINEFORM9 is calculated as in equation 1: DISPLAYFORM0 ",
"Here INLINEFORM0 is a scoring function which indicates the degree of association between term INLINEFORM1 and concept INLINEFORM2 . For example, gabrilovich2007computing proposed Explicit Semantic Analysis (ESA) which uses Wikipedia articles as concepts and the TF-IDF score of the terms in these article as the association score. Another scoring function might be the co-occurrence count or Pearson correlation score between INLINEFORM3 and INLINEFORM4 . As we can notice, only very small subset of the concept space would have non-zero scores with the given terms. Moreover, the BOC vector is generated from the INLINEFORM5 concepts which have relatively high association scores with the input terms (typically few hundreds). Thus each text snippet is mapped to a very sparse vector of millions of dimensions having only few hundreds non-zero values BIBREF10 .",
"Typically, the cosine similarity measure is used compute the similarity between a pair of BOC vectors INLINEFORM0 and INLINEFORM1 . Because the concept vectors are very sparse, we can rewrite each vector as a vector of tuples INLINEFORM2 . Suppose that INLINEFORM3 and INLINEFORM4 , where INLINEFORM5 and INLINEFORM6 are the corresponding weights of concepts INLINEFORM7 and INLINEFORM8 respectively. And INLINEFORM9 , INLINEFORM10 are the indices of these concepts in the concept space INLINEFORM11 such that INLINEFORM12 . Then, the relatedness score can be written as in equation 2: DISPLAYFORM0 ",
"where INLINEFORM0 is the indicator function which returns 1 if INLINEFORM1 and 0 otherwise. Having such sparse representation and using exact match similarity scoring measure, we can expect that two very similar text snippets might have zero similarity score if they map to different but very related set of concepts BIBREF7 .",
"Neural embedding models have been proposed to overcome the BOC sparsity problem. The basic idea is to learn fixed size continuous vectors for each concept. These vectors can then be used to compute concept-concept similarity and thus overcome the concept mismatch problem.",
"In this paper we propose two neural embedding models in order to learn continuous concept vectors based on the skip-gram model BIBREF11 . Our first model is the Concept Raw Context model (CRC) which utilizes concept mentions in a large scale KB to jointly learn embeddings of both words and concepts. Our second model is the Concept-Concept Context model (3C) which learns the embeddings of concepts from their conceptual contexts (i.e., contexts containing surrounding concepts only). After learning the concept vectors, we propose an efficient concept vector aggregation method to generate fully dense BOC representations. Our efficient aggregation method allows measuring the similarity between pairs of BOC vectors in linear time. This is more efficient than prior methods which require quadratic time or at least log-linear time if optimized (see equation 2).",
"We evaluate our embedding models on two tasks:",
"The contributions of this paper are threefold: First, we propose two low cost concept embedding models which requires few hours rather than days to train. Second, we propose simple and efficient vector aggregation method to obtain fully densified BOC vectors in linear time. Third, we demonstrate through experiments that we can obtain same or better accuracy using the densified BOC representation with much less dimensions (few in most cases), reducing the computational cost of generating the BOC vector significantly."
],
[
"Concept/Entity Embeddings: neural embedding models have been proposed to learn distributed representations of concepts/entities. songunsupervised proposed using the popular Word2Vec model BIBREF12 to obtain the embeddings of each concept by averaging the vectors of the concept's individual words. For example, the embeddings of Microsoft Office would be obtained by averaging the embeddings of Microsoft and Office obtained from the Word2Vec model. Clearly, this method disregards the fact that the semantics of multi-word concepts is different from the semantics of their individual words. More robust concept embeddings can be learned from the concept's corresponding article and/or from the structure of the employed KB (e.g., its link graph). Such concept embedding models were proposed by hu2015entity,li2016joint,yamada2016joint who all utilize the skip-gram model BIBREF11 , but differ in how they define the context of the target concept.",
"li2016joint extended the embedding model proposed by hu2015entity by jointly learning concept and category embeddings from contexts defined by all other concepts in the target concept's article as well as its category hierarchy in Wikipedia. This method has the advantage of learning embeddings of both concepts and categories simultaneously. However, defining the concept contexts as pairs of the target concept and all other concepts appearing in its corresponding article might introduce noisy contexts, especially for long articles. For example, the Wikipedia article for United States contains links to Kindergarten, First grade, and Secondary school under the Education section.",
"yamada2016joint proposed a method based on the skip-gram model to jointly learn embeddings of words and concepts using contexts generated from surrounding words of the target concept or word. The authors also proposed incorporating the KB link graph by generating contexts from all concepts with outgoing link to the target concept to better model concept-concept relatedness.",
"Unlike li2016joint and hu2015entity who learn concept embeddings only, our CRC model (described in Section 3), maps both words and concepts into the same semantic space. Therefore we can easily measure word-word, word-concept, and concept-concept semantic similarities. In addition, compared to yamada2016joint model, we utilize contexts generated from both surrounding words and concepts. Therefore, we can better capture local contextual information of each target word/concept. Moreover, our proposed models are computationally less costly than hu2015entity and yamada2016joint models as they require few hours rather than days to train on similar computing resources.",
"BOC Densification: distributed concept vectors have been used by BOC densification mechanisms to overcome the BOC sparsity problem. songunsupervised proposed three different mechanisms for aligning the concepts at different indices given a sparse BOC pair ( INLINEFORM0 ) in order to increase their similarity score.",
"The many-to-many mechanism works by averaging all pairwise similarities. The many-to-one mechanism works by aligning each concept in INLINEFORM0 with the most similar concept in INLINEFORM1 (i.e., its best match). Clearly, the complexity of these two mechanisms is quadratic. The third mechanism is the one-to-one. It utilizes the Hungarian method in order to find an optimal alignment on a one-to-one basis BIBREF13 . This mechanism performed the best on dataless classification and was also utilized by li2016joint. However, the Hungarian method is a combinatorial optimization algorithm whose complexity is polynomial. Our proposed densification mechanism is more efficient than these three mechanisms as its complexity is linear with respect to the number of non-zero elements in the BOC vector. Additionally, it is simpler as it does not require tuning a cut off threshold for the minimum similarity between two aligned concepts as in prior work."
],
[
"A main objective of learning concept embeddings is to overcome the inherent problem of data sparsity associated with the BOC representation. Here we try to learn continuous concept vectors by building upon the skip-gram embedding model BIBREF11 . In the conventional skip-gram model, a set of contexts are generated by sliding a context window of predefined size over sentences of a given text corpus. Vector representation of a target word is learned with the objective to maximize the ability of predicting surrounding words of that target word.",
"Formally, given a training corpus of INLINEFORM0 words INLINEFORM1 . The skip-gram model aims to maximize the average log probability: DISPLAYFORM0 ",
"where INLINEFORM0 is the context window size, INLINEFORM1 is the target word, and INLINEFORM2 is a surrounding context word. The softmax function is used to estimate the probability INLINEFORM3 as follows: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are the input and output vectors respectively and INLINEFORM2 is the vocabulary size. mikolov2013distributed proposed hierarchical softmax and negative sampling as efficient alternatives to approximate the softmax function which becomes computationally intractable when INLINEFORM3 becomes huge.",
"Our approach genuinely learns distributed concept representations by generating concept contexts from mentions of those concepts in large encyclopedic KBs such as Wikipedia. Utilizing such annotated KBs eliminates the need to manually annotate concept mentions and thus comes at no cost."
],
[
"In this model, we jointly learn the embeddings of both words and concepts. First, all concept mentions are identified in the given corpus. Second, contexts are generated for both words and concepts from both other surrounding words and other surrounding concepts as well. After generating all the contexts, we use the skip-gram model to jointly learn words and concepts embeddings. Formally, given a training corpus of INLINEFORM0 words INLINEFORM1 . We iterate over the corpus identifying words and concept mentions and thus generating a sequence of INLINEFORM2 tokens INLINEFORM3 where INLINEFORM4 (as multi-word concepts will be counted as one token). Afterwards we train the a skip-gram model aiming to maximize: DISPLAYFORM0 ",
"where as in the conventional skip-gram model, INLINEFORM0 is the context window size. In this model, INLINEFORM1 is the target token which would be either a word or a concept mention, and INLINEFORM2 is a surrounding context word or concept mention.",
"This model is different from yamada2016joint anchor context model in three aspects: 1) while generating target concept contexts, we utilize not only surrounding words but other surrounding concepts as well, 2) our model aims to maximize INLINEFORM0 where INLINEFORM1 could be a word or a concept, while yamada2016joint model maximizes INLINEFORM2 where INLINEFORM3 is the target concept/entity (see yamada2016joint Eq. 6), and 3) in case INLINEFORM4 is a concept, our model captures all the contexts in which it appeared, while yamada2016joint model generates for each entity one context of INLINEFORM5 previous and INLINEFORM6 next words. We hypothesize that considering both concepts and individual words in the optimization function would generate more robust embeddings."
],
[
"Inspired by the distributional hypothesis BIBREF14 , we, in this model, hypothesize that \"similar concepts tend to appear in similar conceptual contexts\". In order to test this hypothesis, we learn concept embeddings by training a skip-gram model on contexts generated solely from concept mentions. As in the CRC model, we start by identifying all concept mentions in the given corpus. Then, contexts are generated from only surrounding concepts. Formally, given a training corpus of INLINEFORM0 words INLINEFORM1 . We iterate over the corpus identifying concept mentions and thus generating a sequence of INLINEFORM2 concept tokens INLINEFORM3 where INLINEFORM4 . Afterwards we train the skip-gram model aiming to maximize: DISPLAYFORM0 ",
"where INLINEFORM0 is the context window size, INLINEFORM1 is the target concept, and INLINEFORM2 is a surrounding concept mention within INLINEFORM3 mentions.",
"This model is different from li2016joint and hu2015entity as they define the context of a target concept by all the other concepts which appear in the concept's corresponding article. Clearly, some of these concepts might be irrelevant especially for very long articles which cite hundreds of other concepts. Our 3C model, alternatively, learns concept semantics from surrounding concepts and not only from those that are cited in its article. We also extend the context window beyond pairs of concepts allowing more influence to other nearby concepts.",
"The main advantage of the 3C model over the CRC model is its computational efficiency where the model vocabulary is limited to the corpus concepts (Wikipedia in our case). One the other hand, the CRC model is advantageous because it jointly learns the embeddings of words and concepts and is therefore expected to generate higher quality vectors. In other words, the CRC model can capture more contextual signals such as actions, times, and relationships at the expense of training computational cost."
],
[
"We utilize a recent Wikipedia dump of August 2016, which has about 7 million articles. We extract articles plain text discarding images and tables. We also discard References and External links sections (if any). We pruned both articles not under the main namespace and pruned all redirect pages as well. Eventually, our corpus contained about 5 million articles in total.",
"We preprocess each article replacing all its references to other Wikipedia articles with the their corresponding page IDs. In case any of the references is a title of a redirect page, we use the page ID of the original page to ensure that all concept mentions are normalized.",
"Following mikolov2013distributed, we utilize negative sampling to approximate the softmax function by replacing every INLINEFORM0 term in the softmax function (equation 4) with: DISPLAYFORM0 ",
"where INLINEFORM0 is the number of negative samples drawn for each word and INLINEFORM1 is the sigmoid function ( INLINEFORM2 ). In the case of the CRC model INLINEFORM3 and INLINEFORM4 would be replaced with INLINEFORM5 and INLINEFORM6 respectively. And in the case of the 3C model INLINEFORM7 and INLINEFORM8 would be replaced with INLINEFORM9 and INLINEFORM10 respectively.",
"For both the CRC & 3C models with use a context window of size 9 and a vector of 500 dimensions. We train the skip-gram model for 10 iterations using 12 cores machine with 64GB of RAM. The CRC model took INLINEFORM0 15 hours to train for a total of INLINEFORM1 12.7 million tokens. The 3C model took INLINEFORM2 1.5 hours to train for a total of INLINEFORM3 4.5 million concepts."
],
[
"As we mentioned in the related work section, the current mechanisms for BOC densification are inefficient as their complexity is least quadratic with respect to the number of non-zero elements in the BOC vector. Here, we propose simple and efficient vector aggregation method to obtain fully densified BOC vectors in linear time. Our mechanism works by performing a weighted average of the embedding vectors of all concepts in the given BOC. This operation scales linearly with the number of non-zero dimensions in the BOC vector. In addition, it produces a fully dense vector representing the semantics of the original concepts and considering their weights. Formally, given a sparse BOC vector INLINEFORM0 where INLINEFORM1 is weight of concept INLINEFORM2 . We can obtain the dense representation of INLINEFORM3 as in equation 8: DISPLAYFORM0 ",
"where INLINEFORM0 is the embedding vector of concept INLINEFORM1 . Once we have this dense BOC vector, we can apply the cosine measure to compute the similarity between a pair of dense BOC vectors.",
"As we can notice, this weighted average is done once and for all for a given BOC vector. Other mechanisms that rely on concept alignment BIBREF7 , require realignment every time a given BOC vector is compared to another BOC vector. Our approach improves the efficiency especially in the context of dataless classification with large number of classes. Using our densification mechanism, we apply the weighted average for each class vector and for each instance once.",
"Interestingly, our densification mechanism allows us to densify the sparse BOC vector using only the top few dimensions. As we will show in the experiments section, we can get (near-)best results using these few dimensions compared to densifying with all the dimensions in the original sparse vector. This property reduces the cost of obtaining a BOC vector with a few hundred dimensions in the first place."
],
[
"We evaluate the \"goodness\" of our concept embeddings on measuring entity semantic relatedness as an intrinsic evaluation. Entity relatedness has been recently used to model entity coherence in many named entity disambiguation systems.",
"We use a benchmark dataset created by ceccarelli2013learning from the CoNLL 2003 data. As in previous studies BIBREF16 , BIBREF15 , we model measuring entity relatedness as a ranking problem. We use the test split of the dataset to create 3,314 queries. Each query has a query entity and INLINEFORM0 91 response entities labeled as related or unrelated. The quality is measured by the ability of the system to rank related entities on top of unrelated ones.",
"We compare our models with two prior methods:",
"yamada2016joint who used the skip-gram model to learn embeddings of words and entities jointly. The authors also utilized Wikipedia link graph to better model entity-entity relatedness.",
"witten2008effective who proposed Wikipedia Link-based Measure (WLM) as a simple mechanism for modeling the semantic relatedness between Wikipedia concepts. The authors utilized Wikipedia link structure under the assumption that related concepts would have similar incoming links.",
"We report Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (nDCG) scores as commonly used measures for evaluating the ranking quality. Table 1 shows the performance of our CRC and 3C models compared to previous models. As we can see, the 3C model performs poorly on this task compared to prior models. On the other hand, our CRC model outperforms all the other methods by 2-4% in terms of nDCG and by 3% percent in terms of MAP."
],
[
"chang2008importance proposed dataless classification as a learning protocol to perform text categorization without the need for labeled data to train a classifier. Given only label names and few descriptive keywords of each label, classification is performed on the fly by mapping each label into a BOC representation using ESA. Likewise, each data instance is mapped into the same BOC semantic space and assigned to the most similar label using a proper similarity measure such as cosine. Formally, given a set of INLINEFORM0 labels INLINEFORM1 , a text document INLINEFORM2 , a BOC mapping model INLINEFORM3 , and a similarity function INLINEFORM4 , then INLINEFORM5 is assigned to the INLINEFORM6 th label INLINEFORM7 such that INLINEFORM8 . We evaluate the effectiveness of our concept embedding models on the dataless classification task as an extrinsic evaluation. We demonstrate through empirical results the efficiency and effectiveness of our proposed BOC densification scheme in obtaining better classification results compared to the original sparse BOC representation.",
"We use the 20 Newsgroups dataset (20NG) BIBREF17 which is commonly used for benchmarking text classification algorithms. The dataset contains 20 categories each has INLINEFORM0 1000 news posts. We obtained the BOC representations using ESA from song2014dataless who utilized a Wikipedia index containing pages with 100+ words and 5+ outgoing links to create ESA mappings of 500 dimensions for both the categories and news posts of the 20NG. We designed two types of classification tasks: 1) fine-grained classification involving closely related classes such as Hockey vs. Baseball, Autos vs. Motorcycles, and Guns vs. Mideast vs. Misc, and 2) coarse-grained classification involving top-level categories such as Sport vs. Politics and Sport vs. Religion. The top-level categories are created by combining instances of the fine-grained categories as shown in Table 2.",
"We compare our models with three prior methods:",
"ESA which computes the cosine similarity using the sparse BOC vectors.",
"WE INLINEFORM0 & WE INLINEFORM1 which were proposed by songunsupervised for BOC densification using embeddings obtained from Word2Vec. As the authors reported, we fix the minimum similarity threshold to 0.85. WE INLINEFORM2 finds the best match for each concept, while WE INLINEFORM3 utilizes the Hungarian algorithm to find the best concept-concept alignment on one-to-one basis. Both mechanisms have polynomial time complexity.",
"Table 3 presents the results of fine-grained dataless classification measured in micro-averaged F1. As we can notice, ESA achieves its peak performance with a few hundred dimensions of the sparse BOC vector. Using our densification mechanism, both the CRC & 3C models achieve equal performance to ESA at much less dimensions. Densification using the CRC model embeddings gives the best F1 scores on the three tasks. Interestingly, the CRC model improves the F1 score by INLINEFORM0 7% using only 14 concepts on Autos vs. Motorcycles, and by INLINEFORM1 3% using 70 concepts on Guns vs. Mideast vs. Misc. The 3C model, still performs better than ESA on 2 out of the 3 tasks. Both WE INLINEFORM2 and WE INLINEFORM3 improve the performance over ESA but not as our CRC model.",
"In order to better illustrate the robustness of our densification mechanism when varying the # of BOC dimensions, we measured F1 scores of each task as a function of the # of BOC dimensions used for densification. As we see in Figure 1, with one concept we can achieve high F1 scores compared to ESA which achieves zero or very low F1. Moreover, near-peak performance is achievable with the top 50 or less dimensions. We can also notice that, as we increase the # of dimensions, both WE INLINEFORM0 and WE INLINEFORM1 densification methods have the same undesired monotonic pattern like ESA. Actually, the imposed threshold by these methods does not allow for full dense representation of the BOC vector and therefore at low dimensions we still see low overall F1 score. Our proposed densification mechanisms besides their low cost, produce fully densified representations allowing good similarities at low dimensions.",
"Results of coarse-grained classification are presented in Table 4. Classification at the top level is easier than the fine-grained level. Nevertheless, as with fine-grained classification, ESA still peaks with a few hundred dimensions of the sparse BOC vector. Both the CRC & 3C models achieve equal performance to ESA at very few dimensions ( INLINEFORM0 ). Densification using the CRC model embeddings still performs the best on both tasks. Interestingly, the 3C model gives very close F1 scores to the CRC model at less dimensions (@4 with Sport vs. Politics, and @60 with Sport vs. Religion) indicating its competitive advantage when computational cost is a decisive criteria. The 3C model, still performs better than ESA, WE INLINEFORM1 , and WE INLINEFORM2 on both tasks.",
"Figure 2 shows F1 scores of coarse-grained classification when varying the # of BOC dimensions used for densification. The same pattern of achieving near-peak performance at very few dimensions recur with the CRC & 3C models. ESA using the sparse BOC vectors achieves low F1 up until few hundred dimensions are considered. Even with the costly WE INLINEFORM0 and WE INLINEFORM1 densifications, performance sometimes decreases."
],
[
"In this paper we proposed two models for learning concept embeddings based on the skip-gram model. We also proposed an efficient and effective mechanism for BOC densification which outperformed the prior proposed densification schemes on dataless classification. Unlike these prior densification mechanisms, our method scales linearly with the # of the BOC dimensions. In addition, we demonstrated through the results how this efficient mechanism allows generating high quality dense BOC vectors from few concepts alleviating the need of obtaining hundreds of concepts when generating the concept vector."
]
],
"section_name": [
"Introduction",
"Related Work",
"Concept Embeddings for BOC Densification",
"Concept Raw Context Model (CRC)",
"Concept-Concept Context Model (3C)",
"Training",
"BOC Densification",
"Entity Semantic Relatedness",
"Dataless Classification",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"2db669ccf2871ee1e9f92f7c3711d40de5ba5b04",
"d007ddb567ebf52a87d18f39f8229d1a4d0f6b67"
],
"answer": [
{
"evidence": [
"We use a benchmark dataset created by ceccarelli2013learning from the CoNLL 2003 data. As in previous studies BIBREF16 , BIBREF15 , we model measuring entity relatedness as a ranking problem. We use the test split of the dataset to create 3,314 queries. Each query has a query entity and INLINEFORM0 91 response entities labeled as related or unrelated. The quality is measured by the ability of the system to rank related entities on top of unrelated ones."
],
"extractive_spans": [
"a benchmark dataset created by ceccarelli2013learning from the CoNLL 2003 data"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use a benchmark dataset created by ceccarelli2013learning from the CoNLL 2003 data. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use a benchmark dataset created by ceccarelli2013learning from the CoNLL 2003 data. As in previous studies BIBREF16 , BIBREF15 , we model measuring entity relatedness as a ranking problem. We use the test split of the dataset to create 3,314 queries. Each query has a query entity and INLINEFORM0 91 response entities labeled as related or unrelated. The quality is measured by the ability of the system to rank related entities on top of unrelated ones."
],
"extractive_spans": [
"dataset created by ceccarelli2013learning from the CoNLL 2003 data"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use a benchmark dataset created by ceccarelli2013learning from the CoNLL 2003 data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"731a9bd402c75087d34531aaf2a5e5a0775df999",
"75f402828d97eda9f69b2c0d251098ae3df84cac"
],
"answer": [
{
"evidence": [
"In this paper we propose two neural embedding models in order to learn continuous concept vectors based on the skip-gram model BIBREF11 . Our first model is the Concept Raw Context model (CRC) which utilizes concept mentions in a large scale KB to jointly learn embeddings of both words and concepts. Our second model is the Concept-Concept Context model (3C) which learns the embeddings of concepts from their conceptual contexts (i.e., contexts containing surrounding concepts only). After learning the concept vectors, we propose an efficient concept vector aggregation method to generate fully dense BOC representations. Our efficient aggregation method allows measuring the similarity between pairs of BOC vectors in linear time. This is more efficient than prior methods which require quadratic time or at least log-linear time if optimized (see equation 2)."
],
"extractive_spans": [
"Our first model is the Concept Raw Context model (CRC) which utilizes concept mentions in a large scale KB to jointly learn embeddings of both words and concepts. Our second model is the Concept-Concept Context model (3C) which learns the embeddings of concepts from their conceptual contexts (i.e., contexts containing surrounding concepts only)"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper we propose two neural embedding models in order to learn continuous concept vectors based on the skip-gram model BIBREF11 . Our first model is the Concept Raw Context model (CRC) which utilizes concept mentions in a large scale KB to jointly learn embeddings of both words and concepts. Our second model is the Concept-Concept Context model (3C) which learns the embeddings of concepts from their conceptual contexts (i.e., contexts containing surrounding concepts only). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper we propose two neural embedding models in order to learn continuous concept vectors based on the skip-gram model BIBREF11 . Our first model is the Concept Raw Context model (CRC) which utilizes concept mentions in a large scale KB to jointly learn embeddings of both words and concepts. Our second model is the Concept-Concept Context model (3C) which learns the embeddings of concepts from their conceptual contexts (i.e., contexts containing surrounding concepts only). After learning the concept vectors, we propose an efficient concept vector aggregation method to generate fully dense BOC representations. Our efficient aggregation method allows measuring the similarity between pairs of BOC vectors in linear time. This is more efficient than prior methods which require quadratic time or at least log-linear time if optimized (see equation 2)."
],
"extractive_spans": [
"Concept Raw Context model",
"Concept-Concept Context model"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our first model is the Concept Raw Context model (CRC) which utilizes concept mentions in a large scale KB to jointly learn embeddings of both words and concepts. Our second model is the Concept-Concept Context model (3C) which learns the embeddings of concepts from their conceptual contexts (i.e., contexts containing surrounding concepts only)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"a9f497c42f6ac0f46ab8df5149059db70f31c1f8",
"deeb4d9017ec4b09c5ee1769781c010a1fd50e88"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 5 Accuracy of concept categorization"
],
"extractive_spans": [],
"free_form_answer": "the CRX model",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5 Accuracy of concept categorization"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table 3 presents the results of fine-grained dataless classification measured in micro-averaged F1. As we can notice, ESA achieves its peak performance with a few hundred dimensions of the sparse BOC vector. Using our densification mechanism, both the CRC & 3C models achieve equal performance to ESA at much less dimensions. Densification using the CRC model embeddings gives the best F1 scores on the three tasks. Interestingly, the CRC model improves the F1 score by INLINEFORM0 7% using only 14 concepts on Autos vs. Motorcycles, and by INLINEFORM1 3% using 70 concepts on Guns vs. Mideast vs. Misc. The 3C model, still performs better than ESA on 2 out of the 3 tasks. Both WE INLINEFORM2 and WE INLINEFORM3 improve the performance over ESA but not as our CRC model."
],
"extractive_spans": [
"3C model"
],
"free_form_answer": "",
"highlighted_evidence": [
"The 3C model, still performs better than ESA on 2 out of the 3 tasks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"6803565248ab66cfd0ccf75276abd58228ef5bd7"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 8 Evaluation results of dataless document classification of coarse-grained classes measured in micro-averaged F1 along with # of dimensions (concepts) at which corresponding performance is achieved"
],
"extractive_spans": [],
"free_form_answer": "The number of dimensions can be reduced by up to 212 times.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 8 Evaluation results of dataless document classification of coarse-grained classes measured in micro-averaged F1 along with # of dimensions (concepts) at which corresponding performance is achieved"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What is the benchmark dataset?",
"What are the two neural embedding models?",
"which neural embedding model works better?",
"What is the degree of dimension reduction of the efficient aggregation method?"
],
"question_id": [
"f4496316ddd35ee2f0ccc6475d73a66abf87b611",
"e8a32460fba149003566969f92ab5dd94a8754a4",
"2a6003a74d051d0ebbe62e8883533a5f5e55078b",
"1b1b0c71f1a4b37c6562d444f75c92eb2c727d9b"
],
"question_writer": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1 Top-3 concepts generated using ESA [6] for two 20-newsgroups categories (Hockey and Guns) along with top-3 concepts of sample instances",
"Fig. 1 Densification of the bag-of-concepts vector using weighted average of the learned concept embeddings. The concept space is defined by all Wikipedia article titles. The concept vector is created from the top n hits of searching a Wikipedia inverted index with the given text. The weights are the TF-IDF scores from searching Wikipedia",
"Table 2 Example three sentences along with sample contexts generated from CRX and CCX. Contexts are generated with a context window of length 3",
"Table 3 Evaluation of concept embeddings for measuring entity semantic relatedness using Spearman rankorder correlation (ρ)",
"Table 4 Top-3 rated entities from CRX & CCX models on sample entities from the 4 domains compared to the ground truth",
"Table 5 Accuracy of concept categorization",
"Table 6 20NG category mappings Top-level Low-level",
"Table 7 Evaluation results of dataless document classification of fine-grained classes measured in microaveraged F1 alongwith # of dimensions (concepts) in the BoC at which corresponding performance is achieved",
"Fig. 2 Micro-averaged F1 scores of fine-grained classes when varying the # of BoC dimensions (color figure online)",
"Table 8 Evaluation results of dataless document classification of coarse-grained classes measured in micro-averaged F1 along with # of dimensions (concepts) at which corresponding performance is achieved",
"Fig. 3 Micro-averaged F1 scores of coarse-grained classes when varying the # of concepts (dimensions) in the BoC from 1 to 500 (color figure online)",
"Table 9 Evaluation results of dataless document classification of coarse-grained classes versus supervised classification with SVM measured in micro-averaged F1 along with # of dimensions (concepts) for CRX & CCX or % of labeled samples for SVM at which corresponding performance is achieved",
"Table 10 Top-5 related concepts from CRX & CCX models for sample target concepts"
],
"file": [
"3-Table1-1.png",
"7-Figure1-1.png",
"9-Table2-1.png",
"14-Table3-1.png",
"15-Table4-1.png",
"16-Table5-1.png",
"17-Table6-1.png",
"18-Table7-1.png",
"19-Figure2-1.png",
"19-Table8-1.png",
"20-Figure3-1.png",
"20-Table9-1.png",
"21-Table10-1.png"
]
} | [
"which neural embedding model works better?",
"What is the degree of dimension reduction of the efficient aggregation method?"
] | [
[
"1702.03342-Dataless Classification-5",
"1702.03342-16-Table5-1.png"
],
[
"1702.03342-19-Table8-1.png"
]
] | [
"the CRX model",
"The number of dimensions can be reduced by up to 212 times."
] | 159 |
1805.03710 | Incorporating Subword Information into Matrix Factorization Word Embeddings | The positive effect of adding subword information to word embeddings has been demonstrated for predictive models. In this paper we investigate whether similar benefits can also be derived from incorporating subwords into counting models. We evaluate the impact of different types of subwords (n-grams and unsupervised morphemes), with results confirming the importance of subword information in learning representations of rare and out-of-vocabulary words. | {
"paragraphs": [
[
"Low dimensional word representations (embeddings) have become a key component in modern NLP systems for language modeling, parsing, sentiment classification, and many others. These embeddings are usually derived by employing the distributional hypothesis: that similar words appear in similar contexts BIBREF0 .",
"The models that perform the word embedding can be divided into two classes: predictive, which learn a target or context word distribution, and counting, which use a raw, weighted, or factored word-context co-occurrence matrix BIBREF1 . The most well-known predictive model, which has become eponymous with word embedding, is word2vec BIBREF2 . Popular counting models include PPMI-SVD BIBREF3 , GloVe BIBREF4 , and LexVec BIBREF5 .",
"These models all learn word-level representations, which presents two main problems: 1) Learned information is not explicitly shared among the representations as each word has an independent vector. 2) There is no clear way to represent out-of-vocabulary (OOV) words.",
"fastText BIBREF6 addresses these issues in the Skip-gram word2vec model by representing a word by the sum of a unique vector and a set of shared character n-grams (from hereon simply referred to as n-grams) vectors. This addresses both issues above as learned information is shared through the n-gram vectors and from these OOV word representations can be constructed.",
"In this paper we propose incorporating subword information into counting models using a strategy similar to fastText.",
"We use LexVec as the counting model as it generally outperforms PPMI-SVD and GloVe on intrinsic and extrinsic evaluations BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , but the method proposed here should transfer to GloVe unchanged.",
"The LexVec objective is modified such that a word's vector is the sum of all its subword vectors.",
"We compare 1) the use of n-gram subwords, like fastText, and 2) unsupervised morphemes identified using Morfessor BIBREF11 to learn whether more linguistically motivated subwords offer any advantage over simple n-grams.",
"To evaluate the impact subword information has on in-vocabulary (IV) word representations, we run intrinsic evaluations consisting of word similarity and word analogy tasks. The incorporation of subword information results in similar gains (and losses) to that of fastText over Skip-gram. Whereas incorporating n-gram subwords tends to capture more syntactic information, unsupervised morphemes better preserve semantics while also improving syntactic results. Given that intrinsic performance can correlate poorly with performance on downstream tasks BIBREF12 , we also conduct evaluation using the VecEval suite of tasks BIBREF13 , in which",
"all subword models, including fastText, show no significant improvement over word-level models.",
"We verify the model's ability to represent OOV words by quantitatively evaluating nearest-neighbors. Results show that, like fastText, both LexVec n-gram and (to a lesser degree) unsupervised morpheme models give coherent answers.",
"This paper discusses related word ( $§$ \"Related Work\" ), introduces the subword LexVec model ( $§$ \"Subword LexVec\" ), describes experiments ( $§$ \"Materials\" ), analyzes results ( $§$ \"Results\" ), and concludes with ideas for future works ( $§$ \"Conclusion and Future Work\" )."
],
[
"Word embeddings that leverage subword information were first introduced by BIBREF14 which represented a word of as the sum of four-gram vectors obtained running an SVD of a four-gram to four-gram co-occurrence matrix. Our model differs by learning the subword vectors and resulting representation jointly as weighted factorization of a word-context co-occurrence matrix is performed.",
"There are many models that use character-level subword information to form word representations BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , as well as fastText (the model on which we base our work). Closely related are models that use morphological segmentation in learning word representations BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . Our model also uses n-grams and morphological segmentation, but it performs explicit matrix factorization to learn subword and word representations, unlike these related models which mostly use neural networks.",
"Finally, BIBREF26 and BIBREF27 retrofit morphological information onto pre-trained models. These differ from our work in that we incorporate morphological information at training time, and that only BIBREF26 is able to generate embeddings for OOV words."
],
[
"The LexVec BIBREF7 model factorizes the PPMI-weighted word-context co-occurrence matrix using stochastic gradient descent. ",
"$$PPMI_{wc} = max(0, \\log \\frac{M_{wc} \\; M_{**}}{ M_{w*} \\; M_{*c} })$$ (Eq. 3) ",
"where $M$ is the word-context co-occurrence matrix constructed by sliding a window of fixed size centered over every target word",
" $w$ in the subsampled BIBREF2 training corpus and incrementing cell $M_{wc}$ for every context word $c$ appearing within this window (forming a $(w,c)$ pair). LexVec adjusts the PPMI matrix using context distribution smoothing BIBREF3 .",
"With the PPMI matrix calculated, the sliding window process is repeated and the following loss functions are minimized for every observed $(w,c)$ pair and target word $w$ : ",
"$$L_{wc} &= \\frac{1}{2} (u_w^\\top v_c - PPMI_{wc})^2 \\\\\n\nL_{w} &= \\frac{1}{2} \\sum \\limits _{i=1}^k{\\mathbf {E}_{c_i \\sim P_n(c)} (u_w^\\top v_{c_i} - PPMI_{wc_i})^2 }$$ (Eq. 4) ",
" where $u_w$ and $v_c$ are $d$ -dimensional word and context vectors. The second loss function describes how, for each target word, $k$ negative samples BIBREF2 are drawn from the smoothed context unigram distribution.",
"Given a set of subwords $S_w$ for a word $w$ , we follow fastText and replace $u_w$ in eq:lexvec2,eq:lexvec3 by $u^{\\prime }_w$ such that: ",
"$$u^{\\prime }_w = \\frac{1}{|S_w| + 1} (u_w + \\sum _{s \\in S_w} q_{hash(s)})$$ (Eq. 5) ",
" such that a word is the sum of its word vector and its $d$ -dimensional subword vectors $q_x$ . The number of possible subwords is very large so the function $hash(s)$ hashes a subword to the interval $[1, buckets]$ . For OOV words, ",
"$$u^{\\prime }_w = \\frac{1}{|S_w|} \\sum _{s \\in S_w} q_{hash(s)}$$ (Eq. 7) ",
"We compare two types of subwords: simple n-grams (like fastText) and unsupervised morphemes. For example, given the word “cat”, we mark beginning and end with angled brackets and use all n-grams of length 3 to 6 as subwords, yielding $S_{\\textnormal {cat}} = \\lbrace \\textnormal {$ $ ca, at$ $, cat} \\rbrace $ . Morfessor BIBREF11 is used to probabilistically segment words into morphemes. The Morfessor model is trained using raw text so it is entirely unsupervised. For the word “subsequent”, we get $S_{\\textnormal {subsequent}} = \\lbrace \\textnormal {$ $ sub, sequent$ $} \\rbrace $ ."
],
[
"Our experiments aim to measure if the incorporation of subword information into LexVec results in similar improvements as observed in moving from Skip-gram to fastText, and whether unsupervised morphemes offer any advantage over n-grams. For IV words, we perform intrinsic evaluation via word similarity and word analogy tasks, as well as downstream tasks. OOV word representation is tested through qualitative nearest-neighbor analysis.",
"All models are trained using a 2015 dump of Wikipedia, lowercased and using only alphanumeric characters. Vocabulary is limited to words that appear at least 100 times for a total of 303517 words. Morfessor is trained on this vocabulary list.",
"We train the standard LexVec (LV), LexVec using n-grams (LV-N), and LexVec using unsupervised morphemes (LV-M) using the same hyper-parameters as BIBREF7 ( $\\textnormal {window} = 2$ , $\\textnormal {initial learning rate} = .025$ , $\\textnormal {subsampling} = 10^{-5}$ , $\\textnormal {negative samples} = 5$ , $\\textnormal {context distribution smoothing} = .75$ , $\\textnormal {positional contexts} = \\textnormal {True}$ ).",
"Both Skip-gram (SG) and fastText (FT) are trained using the reference implementation of fastText with the hyper-parameters given by BIBREF6 ( $\\textnormal {window} = 5$ , $\\textnormal {initial learning rate} = .025$ , $\\textnormal {subsampling} = 10^{-4}$ , $\\textnormal {negative samples} = 5$ ).",
"All five models are run for 5 iterations over the training corpus and generate 300 dimensional word representations. LV-N, LV-M, and FT use 2000000 buckets when hashing subwords.",
"For word similarity evaluations, we use the WordSim-353 Similarity (WS-Sim) and Relatedness (WS-Rel) BIBREF28 and SimLex-999 (SimLex) BIBREF29 datasets, and the Rare Word (RW) BIBREF20 dataset to verify if subword information improves rare word representation. Relationships are measured using the Google semantic (GSem) and syntactic (GSyn) analogies BIBREF2 and the Microsoft syntactic analogies (MSR) dataset BIBREF30 .",
"We also evaluate all five models on downstream tasks from the VecEval suite BIBREF13 , using only the tasks for which training and evaluation data is freely available: chunking, sentiment and question classification, and natural language identification (NLI). The default settings from the suite are used, but we run only the fixed settings, where the embeddings themselves are not tunable parameters of the models, forcing the system to use only the information already in the embeddings.",
"Finally, we use LV-N, LV-M, and FT to generate OOV word representations for the following words: 1) “hellooo”: a greeting commonly used in instant messaging which emphasizes a syllable. 2) “marvelicious”: a made-up word obtained by merging “marvelous” and “delicious”. 3) “louisana”: a misspelling of the proper name “Louisiana”. 4) “rereread”: recursive use of prefix “re”. 5) “tuzread”: made-up prefix “tuz”."
],
[
"Results for IV evaluation are shown in tab:intrinsic, and for OOV in tab:oov.",
"Like in FT, the use of subword information in both LV-N and LV-M results in 1) better representation of rare words, as evidenced by the increase in RW correlation, and 2) significant improvement on the GSyn and MSR tasks, in evidence of subwords encoding information about a word's syntactic function (the suffix “ly”, for example, suggests an adverb).",
"There seems to a trade-off between capturing semantics and syntax as in both LV-N and FT there is an accompanying decrease on the GSem tasks in exchange for gains on the GSyn and MSR tasks. Morphological segmentation in LV-M appears to favor syntax less strongly than do simple n-grams.",
"On the downstream tasks, we only observe statistically significant ( $p < .05$ under a random permutation test) improvement on the chunking task, and it is a very small gain. We attribute this to both regular and subword models having very similar quality on frequent IV word representation. Statistically, these are the words are that are most likely to appear in the downstream task instances, and so the superior representation of rare words",
"has, due to their nature, little impact on overall accuracy. Because in all tasks OOV words are mapped to the “ $\\langle $ unk $\\rangle $ ” token, the subword models are not being used to the fullest, and in future work we will investigate whether generating representations for all words improves task performance.",
"In OOV representation (tab:oov), LV-N and FT work almost identically, as is to be expected. Both find highly coherent neighbors for the words “hellooo”, “marvelicious”, and “rereread”. Interestingly, the misspelling of “louisana” leads to coherent name-like neighbors, although none is the expected correct spelling “louisiana”. All models stumble on the made-up prefix “tuz”. A possible fix would be to down-weigh very rare subwords in the vector summation. LV-M is less robust than LV-N and FT on this task as it is highly sensitive to incorrect segmentation, exemplified in the “hellooo” example.",
"Finally, we see that nearest-neighbors are a mixture of similarly pre/suffixed words. If these pre/suffixes are semantic, the neighbors are semantically related, else if syntactic they have similar syntactic function. This suggests that it should be possible to get tunable representations which are more driven by semantics or syntax by a weighted summation of subword vectors, given we can identify whether a pre/suffix is semantic or syntactic in nature and weigh them accordingly. This might be possible without supervision using corpus statistics as syntactic subwords are likely to be more frequent, and so could be down-weighted for more semantic representations. This is something we will pursue in future work."
],
[
"In this paper, we incorporated subword information (simple n-grams and unsupervised morphemes) into the LexVec word embedding model and evaluated its impact on the resulting IV and OOV word vectors. Like fastText, subword LexVec learns better representations for rare words than its word-level counterpart. All models generated coherent representations for OOV words, with simple n-grams demonstrating more robustness than unsupervised morphemes. In future work, we will verify whether using OOV representations in downstream tasks improves performance. We will also explore the trade-off between semantics and syntax when subword information is used."
]
],
"section_name": [
"Introduction",
"Related Work",
"Subword LexVec",
"Materials",
"Results",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"80c6c03fd4c6d425d09130f5655b37b20ad8e7e1",
"c514bbfe1898b1a3f1830e8f4480dbd27bb1972b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: We generate vectors for OOV using subword information and search for the nearest (cosine distance) words in the embedding space. The LV-M segmentation for each word is: {〈hell, o, o, o〉}, {〈marvel, i, cious〉}, {〈louis, ana〉}, {〈re, re, read〉}, {〈 tu, z, read〉}. We omit the LV-N and FT n-grams as they are trivial and too numerous to list."
],
"extractive_spans": [],
"free_form_answer": "English",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: We generate vectors for OOV using subword information and search for the nearest (cosine distance) words in the embedding space. The LV-M segmentation for each word is: {〈hell, o, o, o〉}, {〈marvel, i, cious〉}, {〈louis, ana〉}, {〈re, re, read〉}, {〈 tu, z, read〉}. We omit the LV-N and FT n-grams as they are trivial and too numerous to list."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2edf02f10b265025ab688a697339ac2016a6a4c9"
],
"answer": [
{
"evidence": [
"We also evaluate all five models on downstream tasks from the VecEval suite BIBREF13 , using only the tasks for which training and evaluation data is freely available: chunking, sentiment and question classification, and natural language identification (NLI). The default settings from the suite are used, but we run only the fixed settings, where the embeddings themselves are not tunable parameters of the models, forcing the system to use only the information already in the embeddings.",
"Finally, we use LV-N, LV-M, and FT to generate OOV word representations for the following words: 1) “hellooo”: a greeting commonly used in instant messaging which emphasizes a syllable. 2) “marvelicious”: a made-up word obtained by merging “marvelous” and “delicious”. 3) “louisana”: a misspelling of the proper name “Louisiana”. 4) “rereread”: recursive use of prefix “re”. 5) “tuzread”: made-up prefix “tuz”."
],
"extractive_spans": [
"We also evaluate all five models on downstream tasks from the VecEval suite BIBREF13 , using only the tasks for which training and evaluation data is freely available: chunking, sentiment and question classification, and natural language identification (NLI). The default settings from the suite are used, but we run only the fixed settings, where the embeddings themselves are not tunable parameters of the models, forcing the system to use only the information already in the embeddings."
],
"free_form_answer": "",
"highlighted_evidence": [
"We also evaluate all five models on downstream tasks from the VecEval suite BIBREF13 , using only the tasks for which training and evaluation data is freely available: chunking, sentiment and question classification, and natural language identification (NLI). The default settings from the suite are used, but we run only the fixed settings, where the embeddings themselves are not tunable parameters of the models, forcing the system to use only the information already in the embeddings.\n\nFinally, we use LV-N, LV-M, and FT to generate OOV word representations for the following words: 1) “hellooo”: a greeting commonly used in instant messaging which emphasizes a syllable. 2) “marvelicious”: a made-up word obtained by merging “marvelous” and “delicious”. 3) “louisana”: a misspelling of the proper name “Louisiana”. 4) “rereread”: recursive use of prefix “re”. 5) “tuzread”: made-up prefix “tuz”."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"78e6d9604657fc8297ca12287ef5dd26ec98920a",
"aa9090778e7bae7a43558276d8344e0547ed809d"
],
"answer": [
{
"evidence": [
"We compare 1) the use of n-gram subwords, like fastText, and 2) unsupervised morphemes identified using Morfessor BIBREF11 to learn whether more linguistically motivated subwords offer any advantage over simple n-grams."
],
"extractive_spans": [
"n-gram subwords",
"unsupervised morphemes identified using Morfessor BIBREF11 to learn whether more linguistically motivated subwords "
],
"free_form_answer": "",
"highlighted_evidence": [
"INLINEFORM0 in the subsampled BIBREF2 training corpus and incrementing cell INLINEFORM1 for every context word INLINEFORM2 appearing within this window (forming a INLINEFORM3 pair). LexVec adjusts the PPMI matrix using context distribution smoothing BIBREF3 .",
"We compare 1) the use of n-gram subwords, like fastText, and 2) unsupervised morphemes identified using Morfessor BIBREF11 to learn whether more linguistically motivated subwords offer any advantage over simple n-grams."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compare two types of subwords: simple n-grams (like fastText) and unsupervised morphemes. For example, given the word “cat”, we mark beginning and end with angled brackets and use all n-grams of length 3 to 6 as subwords, yielding $S_{\\textnormal {cat}} = \\lbrace \\textnormal {$ $ ca, at$ $, cat} \\rbrace $ . Morfessor BIBREF11 is used to probabilistically segment words into morphemes. The Morfessor model is trained using raw text so it is entirely unsupervised. For the word “subsequent”, we get $S_{\\textnormal {subsequent}} = \\lbrace \\textnormal {$ $ sub, sequent$ $} \\rbrace $ ."
],
"extractive_spans": [
"simple n-grams (like fastText) and unsupervised morphemes"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare two types of subwords: simple n-grams (like fastText) and unsupervised morphemes. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"946a2c8efd9c654586558f64877b4ab732406a56",
"c4c97b35a10f43635dbdb3b12a43aa433ea46095"
],
"answer": [
{
"evidence": [
"Word embeddings that leverage subword information were first introduced by BIBREF14 which represented a word of as the sum of four-gram vectors obtained running an SVD of a four-gram to four-gram co-occurrence matrix. Our model differs by learning the subword vectors and resulting representation jointly as weighted factorization of a word-context co-occurrence matrix is performed."
],
"extractive_spans": [
"weighted factorization of a word-context co-occurrence matrix "
],
"free_form_answer": "",
"highlighted_evidence": [
"Our model differs by learning the subword vectors and resulting representation jointly as weighted factorization of a word-context co-occurrence matrix is performed."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [
"The LexVec BIBREF7"
],
"free_form_answer": "",
"highlighted_evidence": [
"The LexVec BIBREF7 model factorizes the PPMI-weighted word-context co-occurrence matrix using stochastic gradient descent. DISPLAYFORM0"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"For which languages do they build word embeddings for?",
"How do they evaluate their resulting word embeddings?",
"What types of subwords do they incorporate in their model?",
"Which matrix factorization methods do they use?"
],
"question_id": [
"9c44df7503720709eac933a15569e5761b378046",
"b7381927764536bd97b099b6a172708125364954",
"df95b3cb6aa0187655fd4856ae2b1f503d533583",
"f7ed3b9ed469ed34f46acde86b8a066c52ecf430"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Word similarity (Spearman’s rho), analogy (% accuracy), and downstream task (% accuracy) results. In downstream tasks, for the same model accuracy varies over different runs, so we report the mean over 20 runs, in which the only significantly (p < .05 under a random permutation test) different result is in chunking.",
"Table 2: We generate vectors for OOV using subword information and search for the nearest (cosine distance) words in the embedding space. The LV-M segmentation for each word is: {〈hell, o, o, o〉}, {〈marvel, i, cious〉}, {〈louis, ana〉}, {〈re, re, read〉}, {〈 tu, z, read〉}. We omit the LV-N and FT n-grams as they are trivial and too numerous to list."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png"
]
} | [
"For which languages do they build word embeddings for?"
] | [
[
"1805.03710-4-Table2-1.png"
]
] | [
"English"
] | 160 |
1807.07279 | Imparting Interpretability to Word Embeddings while Preserving Semantic Structure | As an ubiquitous method in natural language processing, word embeddings are extensively employed to map semantic properties of words into a dense vector representation. They capture semantic and syntactic relations among words but the vector corresponding to the words are only meaningful relative to each other. Neither the vector nor its dimensions have any absolute, interpretable meaning. We introduce an additive modification to the objective function of the embedding learning algorithm that encourages the embedding vectors of words that are semantically related to a predefined concept to take larger values along a specified dimension, while leaving the original semantic learning mechanism mostly unaffected. In other words, we align words that are already determined to be related, along predefined concepts. Therefore, we impart interpretability to the word embedding by assigning meaning to its vector dimensions. The predefined concepts are derived from an external lexical resource, which in this paper is chosen as Roget's Thesaurus. We observe that alignment along the chosen concepts is not limited to words in the Thesaurus and extends to other related words as well. We quantify the extent of interpretability and assignment of meaning from our experimental results. We also demonstrate the preservation of semantic coherence of the resulting vector space by using word-analogy and word-similarity tests. These tests show that the interpretability-imparted word embeddings that are obtained by the proposed framework do not sacrifice performances in common benchmark tests. | {
"paragraphs": [
[
"Distributed word representations, commonly referred to as word embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , serve as elementary building blocks in the course of algorithm design for an expanding range of applications in natural language processing (NLP), including named entity recognition BIBREF4 , BIBREF5 , parsing BIBREF6 , sentiment analysis BIBREF7 , BIBREF8 , and word-sense disambiguation BIBREF9 . Although the empirical utility of word embeddings as an unsupervised method for capturing the semantic or syntactic features of a certain word as it is used in a given lexical resource is well-established BIBREF10 , BIBREF11 , BIBREF12 , an understanding of what these features mean remains an open problem BIBREF13 , BIBREF14 and as such word embeddings mostly remain a black box. It is desirable to be able to develop insight into this black box and be able to interpret what it means, while retaining the utility of word embeddings as semantically-rich intermediate representations. Other than the intrinsic value of this insight, this would not only allow us to explain and understand how algorithms work BIBREF15 , but also set a ground that would facilitate the design of new algorithms in a more deliberate way.",
"Recent approaches to generating word embeddings (e.g. BIBREF0 , BIBREF2 ) are rooted linguistically in the field of distributed semantics BIBREF16 , where words are taken to assume meaning mainly by their degree of interaction (or lack thereof) with other words in the lexicon BIBREF17 , BIBREF18 . Under this paradigm, dense, continuous vector representations are learned in an unsupervised manner from a large corpus, using the word cooccurrence statistics directly or indirectly, and such an approach is shown to result in vector representations that mathematically capture various semantic and syntactic relations between words BIBREF0 , BIBREF2 , BIBREF3 . However, the dense nature of the learned embeddings obfuscate the distinct concepts encoded in the different dimensions, which renders the resulting vectors virtually uninterpretable. The learned embeddings make sense only in relation to each other and their specific dimensions do not carry explicit information that can be interpreted. However, being able to interpret a word embedding would illuminate the semantic concepts implicitly represented along the various dimensions of the embedding, and reveal its hidden semantic structures.",
"In the literature, researchers tackled interpretability problem of the word embeddings using different approaches. Several researchers BIBREF19 , BIBREF20 , BIBREF21 proposed algorithms based on non-negative matrix factorization (NMF) applied to cooccurrence variant matrices. Other researchers suggested to obtain interpretable word vectors from existing uninterpretable word vectors by applying sparse coding BIBREF22 , BIBREF23 , by training a sparse auto-encoder to transform the embedding space BIBREF24 , by rotating the original embeddings BIBREF25 , BIBREF26 or by applying transformations based on external semantic datasets BIBREF27 .",
"Although the above-mentioned approaches provide better interpretability that is measured using a particular method such as word intrusion test, usually the improved interpretability comes with a cost of performance in the benchmark tests such as word similarity or word analogy. One possible explanation for this performance decrease is that the proposed transformations from the original embedding space distort the underlying semantic structure constructed by the original embedding algorithm. Therefore, it can be claimed that a method that learns dense and interpretable word embeddings without inflicting any damage to the underlying semantic learning mechanism is the key to achieve both high performing and interpretable word embeddings.",
"Especially after the introduction of the word2vec algorithm by Mikolov BIBREF0 , BIBREF1 , there has been a growing interest in algorithms that generate improved word representations under some performance metric. Significant effort is spent on appropriately modifying the objective functions of the algorithms in order to incorporate knowledge from external resources, with the purpose of increasing the performance of the resulting word representations BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 . Inspired by the line of work reported in these studies, we propose to use modified objective functions for a different purpose: learning more interpretable dense word embeddings. By doing this, we aim to incorporate semantic information from an external lexical resource into the word embedding so that the embedding dimensions are aligned along predefined concepts. This alignment is achieved by introducing a modification to the embedding learning process. In our proposed method, which is built on top of the GloVe algorithm BIBREF2 , the cost function for any one of the words of concept word-groups is modified by the introduction of an additive term to the cost function. Each embedding vector dimension is first associated with a concept. For a word belonging to any one of the word-groups representing these concepts, the modified cost term favors an increase for the value of this word's embedding vector dimension corresponding to the concept that the particular word belongs to. For words that do not belong to any one of the word-groups, the cost term is left untouched. Specifically, Roget's Thesaurus BIBREF38 , BIBREF39 is used to derive the concepts and concept word-groups to be used as the external lexical resource for our proposed method. We quantitatively demonstrate the increase in interpretability by using the measure given in BIBREF27 , BIBREF40 as well as demonstrating qualitative results. We also show that the semantic structure of the original embedding has not been harmed in the process since there is no performance loss with standard word-similarity or word-analogy tests.",
"The paper is organized as follows. In Section SECREF2 , we discuss previous studies related to our work under two main categories: interpretability of word embeddings and joint-learning frameworks where the objective function is modified. In Section SECREF3 , we present the problem framework and provide the formulation within the GloVe BIBREF2 algorithm setting. In Section SECREF4 where our approach is proposed, we motivate and develop a modification to the original objective function with the aim of increasing representation interpretability. In Section SECREF5 , experimental results are provided and the proposed method is quantitatively and qualitatively evaluated. Additionally, in Section SECREF5 , results demonstrating the extent to which the original semantic structure of the embedding space is affected are presented by using word-analogy and word-similarity tests. We conclude the paper in Section SECREF6 ."
],
[
"Methodologically, our work is related to prior studies that aim to obtain “improved” word embeddings using external lexical resources, under some performance metric. Previous work in this area can be divided into two main categories: works that i) modify the word embedding learning algorithm to incorporate lexical information, ii) operate on pre-trained embeddings with a post-processing step.",
"Among works that follow the first approach, BIBREF28 extend the Skip-Gram model by incorporating the word similarity relations extracted from the Paraphrase Database (PPDB) and WordNet BIBREF29 , into the Skip-Gram predictive model as an additional cost term. In BIBREF30 , the authors extend the CBOW model by considering two types of semantic information, termed relational and categorical, to be incorporated into the embeddings during training. For the former type of semantic information, the authors propose the learning of explicit vectors for the different relations extracted from a semantic lexicon such that the word pairs that satisfy the same relation are distributed more homogeneously. For the latter, the authors modify the learning objective such that some weighted average distance is minimized for words under the same semantic category. In BIBREF31 , the authors represent the synonymy and hypernymy-hyponymy relations in terms of inequality constraints, where the pairwise similarity rankings over word triplets are forced to follow an order extracted from a lexical resource. Following their extraction from WordNet, the authors impose these constraints in the form of an additive cost term to the Skip-Gram formulation. Finally, BIBREF32 builds on top of the GloVe algorithm by introducing a regularization term to the objective function that encourages the vector representations of similar words as dictated by WordNet to be similar as well.",
"Turning our attention to the post-processing approach for enriching word embeddings with external lexical knowledge, BIBREF33 has introduced the retrofitting algorithm that acts on pre-trained embeddings such as Skip-Gram or GloVe. The authors propose an objective function that aims to balance out the semantic information captured in the pre-trained embeddings with the constraints derived from lexical resources such as WordNet, PPDB and FrameNet. One of the models proposed in BIBREF34 extends the retrofitting approach to incorporate the word sense information from WordNet. Similarly, BIBREF35 creates multi-sense embeddings by gathering the word sense information from a lexical resource and learning to decompose the pre-trained embeddings into a convex combination of sense embeddings. In BIBREF36 , the authors focus on improving word embeddings for capturing word similarity, as opposed to mere relatedness. To this end, they introduce the counter-fitting technique which acts on the input word vectors such that synonymous words are attracted to one another whereas antonymous words are repelled, where the synonymy-antonymy relations are extracted from a lexical resource. More recently, the ATTRACT-REPEL algorithm proposed by BIBREF37 improves on counter-fitting by a formulation which imparts the word vectors with external lexical information in mini-batches.",
"Most of the studies discussed above ( BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF36 , BIBREF37 ) report performance improvements in benchmark tests such as word similarity or word analogy, while BIBREF29 uses a different analysis method (mean reciprocal rank). In sum, the literature is rich with studies aiming to obtain word embeddings that perform better under specific performance metrics. However, less attention has been directed to the issue of interpretability of the word embeddings. In the literature, the problem of interpretability has been tackled using different approaches. BIBREF19 proposed non-negative matrix factorization (NMF) for learning sparse, interpretable word vectors from co-occurrence variant matrices where the resulting vector space is called non-negative sparse embeddigns (NNSE). However, since NMF methods require maintaining a global matrix for learning, they suffer from memory and scale issue. This problem has been addressed in BIBREF20 where an online method of learning interpretable word embeddings from corpora using a modified version of skip-gram model BIBREF0 is proposed. As a different approach, BIBREF21 combined text-based similarity information among words with brain activity based similarity information to improve interpretability using joint non-negative sparse embedding (JNNSE).",
"A common alternative approach for learning interpretable embeddings is to learn transformations that map pre-trained state-of-the-art embeddings to new interpretable semantic spaces. To obtain sparse, higher dimensional and more interpretable vector spaces, BIBREF22 and BIBREF23 use sparse coding on conventional dense word embeddings. However, these methods learn the projection vectors that are used for the transformation from the word embeddings without supervision. For this reason, labels describing the corresponding semantic categories cannot be provided. An alternative approach was proposed in BIBREF25 , where orthogonal transformations were utilized to increase interpretability while preserving the performance of the underlying embedding. However, BIBREF25 has also shown that total interpretability of an embedding is kept constant under any orthogonal transformation and it can only be redistributed across the dimensions. Rotation algorithms based on exploratory factor analysis (EFA) to preserve the performance of the original word embeddings while improving their interpretability was proposed in BIBREF26 . BIBREF24 proposed to deploy a sparse auto-encoder using pre-trained dense word embeddings to improve interpretability. More detailed investigation of semantic structure and interpretability of word embeddings can be found in BIBREF27 , where a metric was proposed to quantitatively measure the degree of interpretability already present in the embedding vector spaces.",
"Previous works on interpretability mentioned above, except BIBREF21 , BIBREF27 and our proposed method, do not need external resources, utilization of which has both advantages and disadvantages. Methods that do not use external resources require fewer resources but they also lack the aid of information extracted from these resources."
],
[
"For the task of unsupervised word embedding extraction, we operate on a discrete collection of lexical units (words) INLINEFORM0 that is part of an input corpus INLINEFORM1 , with number of tokens INLINEFORM2 , sourced from a vocabulary INLINEFORM3 of size INLINEFORM4 . In the setting of distributional semantics, the objective of a word embedding algorithm is to maximize some aggregate utility over the entire corpus so that some measure of “closeness” is maximized for pairs of vector representations INLINEFORM14 for words which, on the average, appear in proximity to one another. In the GloVe algorithm BIBREF2 , which we base our improvements upon, the following objective function is considered: DISPLAYFORM0 ",
"In ( EQREF6 ), INLINEFORM0 and INLINEFORM1 stand for word and context vector representations, respectively, for words INLINEFORM2 and INLINEFORM3 , while INLINEFORM4 represents the (possibly weighted) cooccurrence count for the word pair INLINEFORM5 . Intuitively, ( EQREF6 ) represents the requirement that if some word INLINEFORM6 occurs often enough in the context (or vicinity) of another word INLINEFORM7 , then the corresponding word representations should have a large enough inner product in keeping with their large INLINEFORM8 value, up to some bias terms INLINEFORM9 ; and vice versa. INLINEFORM10 in ( EQREF6 ) is used as a discounting factor that prohibits rare cooccurrences from disproportionately influencing the resulting embeddings.",
"The objective ( EQREF6 ) is minimized using stochastic gradient descent by iterating over the matrix of cooccurrence records INLINEFORM0 . In the GloVe algorithm, for a given word INLINEFORM1 , the final word representation is taken to be the average of the two intermediate vector representations obtained from ( EQREF6 ); i.e, INLINEFORM2 . In the next section, we detail the enhancements made to ( EQREF6 ) for the purposes of enhanced interpretability, using the aforementioned framework as our basis."
],
[
"Our approach falls into a joint-learning framework where the distributional information extracted from the corpus is allowed to fuse with the external lexicon-based information. Word-groups extracted from Roget's Thesaurus are directly mapped to individual dimensions of word embeddings. Specifically, the vector representations of words that belong to a particular group are encouraged to have deliberately increased values in a particular dimension that corresponds to the word-group under consideration. This can be achieved by modifying the objective function of the embedding algorithm to partially influence vector representation distributions across their dimensions over an input vocabulary. To do this, we propose the following modification to the GloVe objective in ( EQREF6 ): rCl J = i,j=1V f(Xij)[ (wiTwj + bi + bj -Xij)2",
" + k(l=1D INLINEFORM0 iFl g(wi,l) + l=1D INLINEFORM1 j Fl g(wj,l) ) ]. In ( SECREF4 ), INLINEFORM2 denotes the indices for the elements of the INLINEFORM3 th concept word-group which we wish to assign in the vector dimension INLINEFORM4 . The objective ( SECREF4 ) is designed as a mixture of two individual cost terms: the original GloVe cost term along with a second term that encourages embedding vectors of a given concept word-group to achieve deliberately increased values along an associated dimension INLINEFORM5 . The relative weight of the second term is controlled by the parameter INLINEFORM6 . The simultaneous minimization of both objectives ensures that words that are similar to, but not included in, one of these concept word-groups are also “nudged” towards the associated dimension INLINEFORM7 . The trained word vectors are thus encouraged to form a distribution where the individual vector dimensions align with certain semantic concepts represented by a collection of concept word-groups, one assigned to each vector dimension. To facilitate this behaviour, ( SECREF4 ) introduces a monotone decreasing function INLINEFORM8 defined as INLINEFORM9 ",
"which serves to increase the total cost incurred if the value of the INLINEFORM0 th dimension for the two vector representations INLINEFORM1 and INLINEFORM2 for a concept word INLINEFORM3 with INLINEFORM4 fails to be large enough. INLINEFORM5 is also shown in Fig. FIGREF7 .",
"The objective ( SECREF4 ) is minimized using stochastic gradient descent over the cooccurrence records INLINEFORM0 . Intuitively, the terms added to ( SECREF4 ) in comparison with ( EQREF6 ) introduce the effect of selectively applying a positive step-type input to the original descent updates of ( EQREF6 ) for concept words along their respective vector dimensions, which influences the dimension value in the positive direction. The parameter INLINEFORM1 in ( SECREF4 ) allows for the adjustment of the magnitude of this influence as needed.",
"In the next section, we demonstrate the feasibility of this approach by experiments with an example collection of concept word-groups extracted from Roget's Thesaurus."
],
[
"We first identified 300 concepts, one for each dimension of the 300-dimensional vector representation, by employing Roget's Thesaurus. This thesaurus follows a tree structure which starts with a Root node that contains all the words and phrases in the thesaurus. The root node is successively split into Classes and Sections, which are then (optionally) split into Subsections of various depths, finally ending in Categories, which constitute the smallest unit of word/phrase collections in the structure. The actual words and phrases descend from these Categories, and make up the leaves of the tree structure. We note that a given word typically appears in multiple categories corresponding to the different senses of the word. We constructed concept word-groups from Roget's Thesaurus as follows: We first filtered out the multi-word phrases and the relatively obscure terms from the thesaurus. The obscure terms were identified by checking them against a vocabulary extracted from Wikipedia. We then obtained 300 word-groups as the result of a partitioning operation applied to the subtree that ends with categories as its leaves. The partition boundaries, hence the resulting word-groups, can be chosen in many different ways. In our proposed approach, we have chosen to determine this partitioning by traversing this tree structure from the root node in breadth-first order, and by employing a parameter INLINEFORM0 for the maximum size of a node. Here, the size of a node is defined as the number of unique words that ever-descend from that node. During the traversal, if the size of a given node is less than this threshold, we designate the words that ultimately descend from that node as a concept word-group. Otherwise, if the node has children, we discard the node, and queue up all its children for further consideration. If this node does not have any children, on the other hand, the node is truncated to INLINEFORM1 elements with the highest frequency-ranks, and the resulting words are designated as a concept word-group. We note that the choice of INLINEFORM2 greatly affects the resulting collection of word-groups: Excessively large values result in few word-groups that greatly overlap with one another, while overly small values result in numerous tiny word-groups that fail to adequately represent a concept. We experimentally determined that a INLINEFORM3 value of 452 results in the most healthy number of relatively large word-groups (113 groups with size INLINEFORM4 100), while yielding a preferably small overlap amongst the resulting word-groups (with average overlap size not exceeding 3 words). A total of 566 word-groups were thus obtained. 259 smallest word-groups (with size INLINEFORM5 38) were discarded to bring down the number of word-groups to 307. Out of these, 7 groups with the lowest median frequency-rank were further discarded, which yields the final 300 concept word-groups used in the experiments. We present some of the resulting word-groups in Table TABREF9 .",
"By using the concept word-groups, we have trained the GloVe algorithm with the proposed modification given in Section SECREF4 on a snapshot of English Wikipedia measuring 8GB in size, with the stop-words filtered out. Using the parameters given in Table TABREF10 , this resulted in a vocabulary size of 287,847. For the weighting parameter in Eq. SECREF4 , we used a value of INLINEFORM0 . The algorithm was trained over 20 iterations. The GloVe algorithm without any modifications was also trained as a baseline with the same parameters. In addition to the original GloVe algorithm, we compare our proposed method with previous studies that aim to obtain interpretable word vectors. We train the improved projected gradient model proposed in BIBREF20 to obtain word vectors (called OIWE-IPG) using the same corpus we use to train GloVe and our proposed method. Using the methods proposed in BIBREF23 , BIBREF26 , BIBREF24 on our baseline GloVe embeddings, we obtain SOV, SPINE and Parsimax (orthogonal) word representations, respectively. We train all the models with the proposed parameters. However, in BIBREF26 , the authors show results for a relatively small vocabulary of 15,000 words. When we trained their model on our baseline GloVe embeddings with a large vocabulary of size 287,847, the resulting vectors performed significantly poor on word similarity tasks compared to the results presented in their paper. In addition, Parsimax (orthogonal) word vectors obtained using method in BIBREF26 are nearly identical to the baseline vectors (i.e. learned orthogonal transformation matrix is very close to identity). Therefore, Parsimax (orthogonal) yields almost same results with baseline vectors in all evaluations. We evaluate the interpretability of the resulting embeddings qualitatively and quantitatively. We also test the performance of the embeddings on word similarity and word analogy tests.",
"In our experiments, vocabulary size is close to 300,000 while only 16,242 unique words of the vocabulary are present in the concept groups. Furthermore, only dimensions that correspond to the concept group of the word will be updated due to the additional cost term. Given that these concept words can belong to multiple concept groups (2 on average), only 33,319 parameters are updated. There are 90 million individual parameters present for the 300,000 word vectors of size 300. Of these parameters, only approximately 33,000 are updated by the additional cost term."
],
[
"In Fig. FIGREF13 , we demonstrate the particular way in which the proposed algorithm ( SECREF4 ) influences the vector representation distributions. Specifically, we consider, for illustration, the 32nd dimension values for the original GloVe algorithm and our modified version, restricting the plots to the top-1000 words with respect to their frequency ranks for clarity of presentation. In Fig. FIGREF13 , the words in the horizontal axis are sorted in descending order with respect to the values at the 32nd dimension of their word embedding vectors coming from the original GloVe algorithm. The dimension values are denoted with blue and red/green markers for the original and the proposed algorithms, respectively. Additionally, the top-50 words that achieve the greatest 32nd dimension values among the considered 1000 words are emphasized with enlarged markers, along with text annotations. In the presented simulation of the proposed algorithm, the 32nd dimension values are encoded with the concept JUDGMENT, which is reflected as an increase in the dimension values for words such as committee, academy, and article. We note that these words (red) are not part of the pre-determined word-group for the concept JUDGMENT, in contrast to words such as award, review and account (green) which are. This implies that the increase in the corresponding dimension values seen for these words is attributable to the joint effect of the first term in ( SECREF4 ) which is inherited from the original GloVe algorithm, in conjunction with the remaining terms in the proposed objective expression ( SECREF4 ). This experiment illustrates that the proposed algorithm is able to impart the concept of JUDGMENT on its designated vector dimension above and beyond the supplied list of words belonging to the concept word-group for that dimension. We also present the list of words with the greatest dimension value for the dimensions 11, 13, 16, 31, 36, 39, 41, 43 and 79 in Table TABREF11 . These dimensions are aligned/imparted with the concepts that are given in the column headers. In Table TABREF11 , the words that are highlighted with green denote the words that exist in the corresponding word-group obtained from Roget's Thesaurus (and are thus explicitly forced to achieve increased dimension values), while the red words denote the words that achieve increased dimension values by virtue of their cooccurrence statistics with the thesaurus-based words (indirectly, without being explicitly forced). This again illustrates that a semantic concept can indeed be coded to a vector dimension provided that a sensible lexical resource is used to guide semantically related words to the desired vector dimension via the proposed objective function in ( SECREF4 ). Even the words that do not appear in, but are semantically related to, the word-groups that we formed using Roget's Thesaurus, are indirectly affected by the proposed algorithm. They also reflect the associated concepts at their respective dimensions even though the objective functions for their particular vectors are not modified. This point cannot be overemphasized. Although the word-groups extracted from Roget's Thesaurus impose a degree of supervision to the process, the fact that the remaining words in the entire vocabulary are also indirectly affected makes the proposed method a semi-supervised approach that can handle words that are not in these chosen word-groups. A qualitative example of this result can be seen in the last column of Table TABREF11 . 
It is interesting to note the appearance of words such as guerilla, insurgency, mujahideen, Wehrmacht and Luftwaffe in addition to the more obvious and straightforward army, soldiers and troops, all of which are not present in the associated word-group WARFARE.",
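The dimension-wise inspection described above amounts to sorting the vocabulary along a chosen embedding dimension and reading off the top-ranked words; a minimal sketch is shown below with randomly generated vectors and a synthetic vocabulary standing in for the trained embeddings.

```python
import numpy as np

def top_words_along_dimension(E, vocab, dim, k=10):
    """Return the k words with the largest value in the given embedding dimension."""
    idx = np.argsort(E[:, dim])[::-1][:k]
    return [vocab[i] for i in idx]

rng = np.random.default_rng(4)
E = rng.normal(size=(100, 300))
vocab = ["word%d" % i for i in range(100)]
print(top_words_along_dimension(E, vocab, dim=32, k=5))
```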
"Most of the dimensions we investigated exhibit similar behaviour to the ones presented in Table TABREF11 . Thus generally speaking, we can say that the entries in Table TABREF11 are representative of the great majority. However, we have also specifically looked for dimensions that make less sense and determined a few such dimensions which are relatively less satisfactory. These less satisfactory examples are given in Table TABREF14 . These examples are also interesting in that they shed insight into the limitations posed by polysemy and existence of very rare outlier words."
],
[
"One of the main goals of this study is to improve the interpretability of dense word embeddings by aligning the dimensions with predefined concepts from a suitable lexicon. A quantitative measure is required to reliably evaluate the achieved improvement. One of the methods proposed to measure the interpretability is the word intrusion test BIBREF41 . But, this method is expensive to apply since it requires evaluations from multiple human evaluators for each embedding dimension. In this study, we use a semantic category-based approach based on the method and category dataset (SEMCAT) introduced in BIBREF27 to quantify interpretability. Specifically, we apply a modified version of the approach presented in BIBREF40 in order to consider possible sub-groupings within the categories in SEMCAT. Interpretability scores are calculated using Interpretability Score (IS) as given below:",
" DISPLAYFORM0 ",
"In ( EQREF17 ), INLINEFORM0 and INLINEFORM1 represents the interpretability scores in the positive and negative directions of the INLINEFORM2 dimension ( INLINEFORM3 , INLINEFORM4 number of dimensions in the embedding space) of word embedding space for the INLINEFORM5 category ( INLINEFORM6 , INLINEFORM7 is number of categories in SEMCAT, INLINEFORM8 ) in SEMCAT respectively. INLINEFORM9 is the set of words in the INLINEFORM10 category in SEMCAT and INLINEFORM11 is the number of words in INLINEFORM12 . INLINEFORM13 corresponds to the minimum number of words required to construct a semantic category (i.e. represent a concept). INLINEFORM14 represents the set of INLINEFORM15 words that have the highest ( INLINEFORM16 ) and lowest ( INLINEFORM17 ) values in INLINEFORM18 dimension of the embedding space. INLINEFORM19 is the intersection operator and INLINEFORM20 is the cardinality operator (number of elements) for the intersecting set. In ( EQREF17 ), INLINEFORM21 gives the interpretability score for the INLINEFORM22 dimension and INLINEFORM23 gives the average interpretability score of the embedding space.",
"Fig. FIGREF18 presents the measured average interpretability scores across dimensions for original GloVe embeddings, for the proposed method and for the other four methods we compare, along with a randomly generated embedding. Results are calculated for the parameters INLINEFORM0 and INLINEFORM1 . Our proposed method significantly improves the interpretability for all INLINEFORM2 compared to the original GloVe approach. Our proposed method is second to only SPINE in increasing interpretability. However, as we will experimentally demonstrate in the next subsection, in doing this, SPINE almost entirely destroys the underlying semantic structure of the word embeddings, which is the primary function of a word embedding.",
"The proposed method and interpretability measurements are both based on utilizing concepts represented by word-groups. Therefore it is expected that there will be higher interpretability scores for some of the dimensions for which the imparted concepts are also contained in SEMCAT. However, by design, word groups that they use are formed by using different sources and are independent. Interpretability measurements use SEMCAT while our proposed method utilizes Roget's Thesaurus."
],
[
"It is necessary to show that the semantic structure of the original embedding has not been damaged or distorted as a result of aligning the dimensions with given concepts, and that there is no substantial sacrifice involved from the performance that can be obtained with the original GloVe. To check this, we evaluate performances of the proposed embeddings on word similarity BIBREF42 and word analogy BIBREF0 tests. We compare the results with the original embeddings and the three alternatives excluding Parsimax BIBREF26 since orthogonal transformations will not affect the performance of the original embeddings on these tests.",
"Word similarity test measures the correlation between word similarity scores obtained from human evaluation (i.e. true similarities) and from word embeddings (usually using cosine similarity). In other words, this test quantifies how well the embedding space reflects human judgements in terms of similarities between different words. The correlation scores for 13 different similarity test sets are reported in Table TABREF20 . We observe that, let alone a reduction in performance, the obtained scores indicate an almost uniform improvement in the correlation values for the proposed algorithm, outperforming all the alternatives in almost all test sets. Categories from Roget's thesaurus are groupings of words that are similar in some sense which the original embedding algorithm may fail to capture. These test results signify that the semantic information injected into the algorithm by the additional cost term is significant enough to result in a measurable improvement. It should also be noted that scores obtained by SPINE is unacceptably low on almost all tests indicating that it has achieved its interpretability performance at the cost of losing its semantic functions.",
"Word analogy test is introduced in BIBREF1 and looks for the answers of the questions that are in the form of \"X is to Y, what Z is to ?\" by applying simple arithmetic operations to vectors of words X, Y and Z. We present precision scores for the word analogy tests in Table TABREF21 . It can be seen that the alternative approaches that aim to improve interpretability, have poor performance on the word analogy tests. However, our proposed method has comparable performance with the original GloVe embeddings. Our method outperforms GloVe in semantic analogy test set and in overall results, while GloVe performs slightly better in syntactic test set. This comparable performance is mainly due to the cost function of our proposed method that includes the original objective of the GloVe.",
"To investigate the effect of the additional cost term on the performance improvement in the semantic analogy test, we present Table TABREF22 . In particular, we present results for the cases where i) all questions in the dataset are considered, ii) only the questions that contains at least one concept word are considered, iii) only the questions that consist entirely of concept words are considered. We note specifically that for the last case, only a subset of the questions under the semantic category family.txt ended up being included. We observe that for all three scenarios, our proposed algorithm results in an improvement in the precision scores. However, the greatest performance increase is seen for the last scenario, which underscores the extent to which the semantic features captured by embeddings can be improved with a reasonable selection of the lexical resource from which the concept word-groups were derived."
],
[
"We presented a novel approach to impart interpretability into word embeddings. We achieved this by encouraging different dimensions of the vector representation to align with predefined concepts, through the addition of an additional cost term in the optimization objective of the GloVe algorithm that favors a selective increase for a pre-specified input of concept words along each dimension.",
"We demonstrated the efficacy of this approach by applying qualitative and quantitative evaluations for interpretability. We also showed via standard word-analogy and word-similarity tests that the semantic coherence of the original vector space is preserved, even slightly improved. We have also performed and reported quantitative comparisons with several other methods for both interpretabilty increase and preservation of semantic coherence. Upon inspection of Fig. FIGREF18 and Tables TABREF20 , TABREF21 , and TABREF22 altogether, it should be noted that our proposed method achieves both of the objectives simultaneously, increased interpretability and preservation of the intrinsic semantic structure.",
"An important point was that, while it is expected for words that are already included in the concept word-groups to be aligned together since their dimensions are directly updated with the proposed cost term, it was also observed that words not in these groups also aligned in a meaningful manner without any direct modification to their cost function. This indicates that the cost term we added works productively with the original cost function of GloVe to handle words that are not included in the original concept word-groups, but are semantically related to those word-groups. The underlying mechanism can be explained as follows. While the outside lexical resource we introduce contains a relatively small number of words compared to the total number of words, these words and the categories they represent have been carefully chosen and in a sense, \"densely span\" all the words in the language. By saying \"span\", we mean they cover most of the concepts and ideas in the language without leaving too many uncovered areas. With \"densely\" we mean all areas are covered with sufficient strength. In other words, this subset of words is able to constitute a sufficiently strong skeleton, or scaffold. Now remember that GloVe works to align or bring closer related groups of words, which will include words from the lexical source. So the joint action of aligning the words with the predefined categories (introduced by us) and aligning related words (handled by GloVe) allows words not in the lexical groups to also be aligned meaningfully. We may say that the non-included words are \"pulled along\" with the included words by virtue of the \"strings\" or \"glue\" that is provided by GloVe. In numbers, the desired effect is achieved by manipulating less than only 0.05% of parameters of the entire word vectors. Thus, while there is a degree of supervision coming from the external lexical resource, the rest of the vocabulary is also aligned indirectly in an unsupervised way. This may be the reason why, unlike earlier proposed approaches, our method is able to achieve increasing interpretability without destroying underlying semantic structure, and consequently without sacrificing performance in benchmark tests.",
"Upon inspecting the 2nd column of Table TABREF14 , where qualitative results for concept TASTE are presented, another insight regarding the learning mechanism of our proposed approach can be made. Here it seems understandable that our proposed approach, along with GloVe, brought together the words taste and polish, and then the words Polish and, for instance, Warsaw are brought together by GloVe. These examples are interesting in that they shed insight into how GloVe works and the limitations posed by polysemy. It should be underlined that the present approach is not totally incapable of handling polysemy, but cannot do so perfectly. Since related words are being clustered, sufficiently well-connected words that do not meaningfully belong along with others will be appropriately \"pulled away\" from that group by several words, against the less effective, inappropriate pull of a particular word. Even though polish with lowercase \"p\" belongs where it is, it is attracting Warsaw to itself through polysemy and this is not meaningful. Perhaps because Warsaw is not a sufficiently well-connected word, it ends being dragged along, although words with greater connectedness to a concept group might have better resisted such inappropriate attractions.",
"In this study, we used the GloVe algorithm as the underlying dense word embedding scheme to demonstrate our approach. However, we stress that it is possible for our approach to be extended to other word embedding algorithms which have a learning routine consisting of iterations over cooccurrence records, by making suitable adjustments in the objective function. Since word2vec model is also based on the coocurrences of words in a sliding window through a large corpus, we expect that our approach can also be applied to word2vec after making suitable adjustments, which can be considered as an immediate future work for our approach. Although the semantic concepts are encoded in only one direction (positive) within the embedding dimensions, it might be beneficial to pursue future work that also encodes opposite concepts, such as good and bad, in two opposite directions of the same dimension.",
"The proposed methodology can also be helpful in computational cross-lingual studies, where the similarities are explored across the vector spaces of different languages BIBREF43 , BIBREF44 ."
]
],
"section_name": [
"Introduction",
"Related Work",
"Problem Description",
"Imparting Interpretability",
"Experiments and Results",
"Qualitative Evaluation for Interpretability",
"Quantitative Evaluation for Interpretability",
"Intrinsic Evaluation of the Embeddings",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"334b951e0b2eaa13598cb059e8ef8ac735658cad",
"ce39feccd99fc024d3608450468c4a2cdec1e1e5"
],
"answer": [
{
"evidence": [
"Especially after the introduction of the word2vec algorithm by Mikolov BIBREF0 , BIBREF1 , there has been a growing interest in algorithms that generate improved word representations under some performance metric. Significant effort is spent on appropriately modifying the objective functions of the algorithms in order to incorporate knowledge from external resources, with the purpose of increasing the performance of the resulting word representations BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 . Inspired by the line of work reported in these studies, we propose to use modified objective functions for a different purpose: learning more interpretable dense word embeddings. By doing this, we aim to incorporate semantic information from an external lexical resource into the word embedding so that the embedding dimensions are aligned along predefined concepts. This alignment is achieved by introducing a modification to the embedding learning process. In our proposed method, which is built on top of the GloVe algorithm BIBREF2 , the cost function for any one of the words of concept word-groups is modified by the introduction of an additive term to the cost function. Each embedding vector dimension is first associated with a concept. For a word belonging to any one of the word-groups representing these concepts, the modified cost term favors an increase for the value of this word's embedding vector dimension corresponding to the concept that the particular word belongs to. For words that do not belong to any one of the word-groups, the cost term is left untouched. Specifically, Roget's Thesaurus BIBREF38 , BIBREF39 is used to derive the concepts and concept word-groups to be used as the external lexical resource for our proposed method. We quantitatively demonstrate the increase in interpretability by using the measure given in BIBREF27 , BIBREF40 as well as demonstrating qualitative results. We also show that the semantic structure of the original embedding has not been harmed in the process since there is no performance loss with standard word-similarity or word-analogy tests.",
"By using the concept word-groups, we have trained the GloVe algorithm with the proposed modification given in Section SECREF4 on a snapshot of English Wikipedia measuring 8GB in size, with the stop-words filtered out. Using the parameters given in Table TABREF10 , this resulted in a vocabulary size of 287,847. For the weighting parameter in Eq. SECREF4 , we used a value of INLINEFORM0 . The algorithm was trained over 20 iterations. The GloVe algorithm without any modifications was also trained as a baseline with the same parameters. In addition to the original GloVe algorithm, we compare our proposed method with previous studies that aim to obtain interpretable word vectors. We train the improved projected gradient model proposed in BIBREF20 to obtain word vectors (called OIWE-IPG) using the same corpus we use to train GloVe and our proposed method. Using the methods proposed in BIBREF23 , BIBREF26 , BIBREF24 on our baseline GloVe embeddings, we obtain SOV, SPINE and Parsimax (orthogonal) word representations, respectively. We train all the models with the proposed parameters. However, in BIBREF26 , the authors show results for a relatively small vocabulary of 15,000 words. When we trained their model on our baseline GloVe embeddings with a large vocabulary of size 287,847, the resulting vectors performed significantly poor on word similarity tasks compared to the results presented in their paper. In addition, Parsimax (orthogonal) word vectors obtained using method in BIBREF26 are nearly identical to the baseline vectors (i.e. learned orthogonal transformation matrix is very close to identity). Therefore, Parsimax (orthogonal) yields almost same results with baseline vectors in all evaluations. We evaluate the interpretability of the resulting embeddings qualitatively and quantitatively. We also test the performance of the embeddings on word similarity and word analogy tests."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Specifically, Roget's Thesaurus BIBREF38 , BIBREF39 is used to derive the concepts and concept word-groups to be used as the external lexical resource for our proposed method.",
"By using the concept word-groups, we have trained the GloVe algorithm with the proposed modification given in Section SECREF4 on a snapshot of English Wikipedia measuring 8GB in size, with the stop-words filtered out."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"4e5859c77fe9bc3b32156c7ea15d60a8d8eaa0f8",
"74e863f56156388e00908de30461fd917334c072"
],
"answer": [
{
"evidence": [
"One of the main goals of this study is to improve the interpretability of dense word embeddings by aligning the dimensions with predefined concepts from a suitable lexicon. A quantitative measure is required to reliably evaluate the achieved improvement. One of the methods proposed to measure the interpretability is the word intrusion test BIBREF41 . But, this method is expensive to apply since it requires evaluations from multiple human evaluators for each embedding dimension. In this study, we use a semantic category-based approach based on the method and category dataset (SEMCAT) introduced in BIBREF27 to quantify interpretability. Specifically, we apply a modified version of the approach presented in BIBREF40 in order to consider possible sub-groupings within the categories in SEMCAT. Interpretability scores are calculated using Interpretability Score (IS) as given below:"
],
"extractive_spans": [],
"free_form_answer": "Human evaluation for interpretability using the word intrusion test and automated evaluation for interpretability using a semantic category-based approach based on the method and category dataset (SEMCAT).",
"highlighted_evidence": [
"A quantitative measure is required to reliably evaluate the achieved improvement. One of the methods proposed to measure the interpretability is the word intrusion test BIBREF41 . But, this method is expensive to apply since it requires evaluations from multiple human evaluators for each embedding dimension. In this study, we use a semantic category-based approach based on the method and category dataset (SEMCAT) introduced in BIBREF27 to quantify interpretability. Specifically, we apply a modified version of the approach presented in BIBREF40 in order to consider possible sub-groupings within the categories in SEMCAT. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"One of the main goals of this study is to improve the interpretability of dense word embeddings by aligning the dimensions with predefined concepts from a suitable lexicon. A quantitative measure is required to reliably evaluate the achieved improvement. One of the methods proposed to measure the interpretability is the word intrusion test BIBREF41 . But, this method is expensive to apply since it requires evaluations from multiple human evaluators for each embedding dimension. In this study, we use a semantic category-based approach based on the method and category dataset (SEMCAT) introduced in BIBREF27 to quantify interpretability. Specifically, we apply a modified version of the approach presented in BIBREF40 in order to consider possible sub-groupings within the categories in SEMCAT. Interpretability scores are calculated using Interpretability Score (IS) as given below:"
],
"extractive_spans": [
"semantic category-based approach"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this study, we use a semantic category-based approach based on the method and category dataset (SEMCAT) introduced in BIBREF27 to quantify interpretability."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"3a1326535ce929377b8e3b744e708286086f78c2"
],
"answer": [
{
"evidence": [
"Especially after the introduction of the word2vec algorithm by Mikolov BIBREF0 , BIBREF1 , there has been a growing interest in algorithms that generate improved word representations under some performance metric. Significant effort is spent on appropriately modifying the objective functions of the algorithms in order to incorporate knowledge from external resources, with the purpose of increasing the performance of the resulting word representations BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 . Inspired by the line of work reported in these studies, we propose to use modified objective functions for a different purpose: learning more interpretable dense word embeddings. By doing this, we aim to incorporate semantic information from an external lexical resource into the word embedding so that the embedding dimensions are aligned along predefined concepts. This alignment is achieved by introducing a modification to the embedding learning process. In our proposed method, which is built on top of the GloVe algorithm BIBREF2 , the cost function for any one of the words of concept word-groups is modified by the introduction of an additive term to the cost function. Each embedding vector dimension is first associated with a concept. For a word belonging to any one of the word-groups representing these concepts, the modified cost term favors an increase for the value of this word's embedding vector dimension corresponding to the concept that the particular word belongs to. For words that do not belong to any one of the word-groups, the cost term is left untouched. Specifically, Roget's Thesaurus BIBREF38 , BIBREF39 is used to derive the concepts and concept word-groups to be used as the external lexical resource for our proposed method. We quantitatively demonstrate the increase in interpretability by using the measure given in BIBREF27 , BIBREF40 as well as demonstrating qualitative results. We also show that the semantic structure of the original embedding has not been harmed in the process since there is no performance loss with standard word-similarity or word-analogy tests."
],
"extractive_spans": [
"dimension corresponding to the concept that the particular word belongs to"
],
"free_form_answer": "",
"highlighted_evidence": [
"For a word belonging to any one of the word-groups representing these concepts, the modified cost term favors an increase for the value of this word's embedding vector dimension corresponding to the concept that the particular word belongs to. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"8757f6f5dfa28f615936244cd1432b7caf89cabe",
"ecc3d5c34a6c7850d5a67c9c7bae18b867436e00"
],
"answer": [
{
"evidence": [
"Especially after the introduction of the word2vec algorithm by Mikolov BIBREF0 , BIBREF1 , there has been a growing interest in algorithms that generate improved word representations under some performance metric. Significant effort is spent on appropriately modifying the objective functions of the algorithms in order to incorporate knowledge from external resources, with the purpose of increasing the performance of the resulting word representations BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 . Inspired by the line of work reported in these studies, we propose to use modified objective functions for a different purpose: learning more interpretable dense word embeddings. By doing this, we aim to incorporate semantic information from an external lexical resource into the word embedding so that the embedding dimensions are aligned along predefined concepts. This alignment is achieved by introducing a modification to the embedding learning process. In our proposed method, which is built on top of the GloVe algorithm BIBREF2 , the cost function for any one of the words of concept word-groups is modified by the introduction of an additive term to the cost function. Each embedding vector dimension is first associated with a concept. For a word belonging to any one of the word-groups representing these concepts, the modified cost term favors an increase for the value of this word's embedding vector dimension corresponding to the concept that the particular word belongs to. For words that do not belong to any one of the word-groups, the cost term is left untouched. Specifically, Roget's Thesaurus BIBREF38 , BIBREF39 is used to derive the concepts and concept word-groups to be used as the external lexical resource for our proposed method. We quantitatively demonstrate the increase in interpretability by using the measure given in BIBREF27 , BIBREF40 as well as demonstrating qualitative results. We also show that the semantic structure of the original embedding has not been harmed in the process since there is no performance loss with standard word-similarity or word-analogy tests."
],
"extractive_spans": [],
"free_form_answer": "The cost function for any one of the words of concept word-groups is modified by the introduction of an additive term to the cost function. . Each embedding vector dimension is first associated with a concept. For a word belonging to any one of the word-groups representing these concepts, the modified cost term favors an increase for the value of this word's embedding vector dimension corresponding to the concept that the particular word belongs to,",
"highlighted_evidence": [
" Inspired by the line of work reported in these studies, we propose to use modified objective functions for a different purpose: learning more interpretable dense word embeddings. By doing this, we aim to incorporate semantic information from an external lexical resource into the word embedding so that the embedding dimensions are aligned along predefined concepts. This alignment is achieved by introducing a modification to the embedding learning process. In our proposed method, which is built on top of the GloVe algorithm BIBREF2 , the cost function for any one of the words of concept word-groups is modified by the introduction of an additive term to the cost function. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Especially after the introduction of the word2vec algorithm by Mikolov BIBREF0 , BIBREF1 , there has been a growing interest in algorithms that generate improved word representations under some performance metric. Significant effort is spent on appropriately modifying the objective functions of the algorithms in order to incorporate knowledge from external resources, with the purpose of increasing the performance of the resulting word representations BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 . Inspired by the line of work reported in these studies, we propose to use modified objective functions for a different purpose: learning more interpretable dense word embeddings. By doing this, we aim to incorporate semantic information from an external lexical resource into the word embedding so that the embedding dimensions are aligned along predefined concepts. This alignment is achieved by introducing a modification to the embedding learning process. In our proposed method, which is built on top of the GloVe algorithm BIBREF2 , the cost function for any one of the words of concept word-groups is modified by the introduction of an additive term to the cost function. Each embedding vector dimension is first associated with a concept. For a word belonging to any one of the word-groups representing these concepts, the modified cost term favors an increase for the value of this word's embedding vector dimension corresponding to the concept that the particular word belongs to. For words that do not belong to any one of the word-groups, the cost term is left untouched. Specifically, Roget's Thesaurus BIBREF38 , BIBREF39 is used to derive the concepts and concept word-groups to be used as the external lexical resource for our proposed method. We quantitatively demonstrate the increase in interpretability by using the measure given in BIBREF27 , BIBREF40 as well as demonstrating qualitative results. We also show that the semantic structure of the original embedding has not been harmed in the process since there is no performance loss with standard word-similarity or word-analogy tests."
],
"extractive_spans": [],
"free_form_answer": "An additive term added to the cost function for any one of the words of concept word-groups",
"highlighted_evidence": [
"In our proposed method, which is built on top of the GloVe algorithm BIBREF2 , the cost function for any one of the words of concept word-groups is modified by the introduction of an additive term to the cost function. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"What experiments do they use to quantify the extent of interpretability?",
"Along which dimension do the semantically related words take larger values?",
"What is the additive modification to the objective function?"
],
"question_id": [
"c7eb71683f53ab7acffd691a36cad6edc7f5522e",
"17a1eff7993c47c54eddc7344e7454fbe64191cd",
"a5e5cda1f6195ab1336855f1e39a609d61326d62",
"32d99dcd8d46e2cda04a9a9fa0e6693d2349a7a9"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Fig. 1. Function g in the additional cost term.",
"TABLE I SAMPLE CONCEPTS AND THEIR ASSOCIATED WORD-GROUPS FROM ROGET’S THESAURUS",
"TABLE II GLOVE PARAMETERS",
"TABLE III WORDS WITH LARGEST DIMENSION VALUES FOR THE PROPOSED ALGORITHM",
"Fig. 2. Most frequent 1000 words sorted according to their values in the 32nd dimension of the original GloVe embedding are shown with blue markers. Red and green markers show the values of the same words for the 32nd dimension of the embedding obtained with the proposed method where the dimension is aligned with the concept JUDGMENT. Words with green markers are contained in the concept JUDGMENT while words with red markers are not contained.",
"Fig. 3. Interpretability scores averaged over 300 dimensions for the original GloVe method, the proposed method, and four alternative methods along with a randomly generated baseline embedding for λ = 5. Please note that Parsimax method yields identical results with the GloVe baseline so that their plots coincide with each other (light blue).",
"TABLE IV WORDS WITH LARGEST DIMENSION VALUES FOR THE PROPOSED ALGORITHM - LESS SATISFACTORY EXAMPLES",
"TABLE V CORRELATIONS FOR WORD SIMILARITY TESTS",
"TABLE VI PRECISION SCORES FOR THE ANALOGY TEST",
"TABLE VII PRECISION SCORES FOR THE SEMANTIC ANALOGY TEST"
],
"file": [
"4-Figure1-1.png",
"5-TableI-1.png",
"5-TableII-1.png",
"6-TableIII-1.png",
"7-Figure2-1.png",
"8-Figure3-1.png",
"8-TableIV-1.png",
"8-TableV-1.png",
"9-TableVI-1.png",
"9-TableVII-1.png"
]
} | [
"What experiments do they use to quantify the extent of interpretability?",
"What is the additive modification to the objective function?"
] | [
[
"1807.07279-Quantitative Evaluation for Interpretability-0"
],
[
"1807.07279-Introduction-4"
]
] | [
"Human evaluation for interpretability using the word intrusion test and automated evaluation for interpretability using a semantic category-based approach based on the method and category dataset (SEMCAT).",
"An additive term added to the cost function for any one of the words of concept word-groups"
] | 161 |
1706.09673 | Improving Distributed Representations of Tweets - Present and Future | Unsupervised representation learning for tweets is an important research field which helps in solving several business applications such as sentiment analysis, hashtag prediction, paraphrase detection and microblog ranking. A good tweet representation learning model must handle the idiosyncratic nature of tweets which poses several challenges such as short length, informal words, unusual grammar and misspellings. However, there is a lack of prior work which surveys the representation learning models with a focus on tweets. In this work, we organize the models based on its objective function which aids the understanding of the literature. We also provide interesting future directions, which we believe are fruitful in advancing this field by building high-quality tweet representation learning models. | {
"paragraphs": [
[
"Twitter is a widely used microblogging platform, where users post and interact with messages, “tweets”. Understanding the semantic representation of tweets can benefit a plethora of applications such as sentiment analysis BIBREF0 , BIBREF1 , hashtag prediction BIBREF2 , paraphrase detection BIBREF3 and microblog ranking BIBREF4 , BIBREF5 . However, tweets are difficult to model as they pose several challenges such as short length, informal words, unusual grammar and misspellings. Recently, researchers are focusing on leveraging unsupervised representation learning methods based on neural networks to solve this problem. Once these representations are learned, we can use off-the-shelf predictors taking the representation as input to solve the downstream task BIBREF6 , BIBREF7 . These methods enjoy several advantages: (1) they are cheaper to train, as they work with unlabelled data, (2) they reduce the dependence on domain level experts, and (3) they are highly effective across multiple applications, in practice.",
"Despite this, there is a lack of prior work which surveys the tweet-specific unsupervised representation learning models. In this work, we attempt to fill this gap by investigating the models in an organized fashion. Specifically, we group the models based on the objective function it optimizes. We believe this work can aid the understanding of the existing literature. We conclude the paper by presenting interesting future research directions, which we believe are fruitful in advancing this field by building high-quality tweet representation learning models."
],
[
"There are various models spanning across different model architectures and objective functions in the literature to compute tweet representation in an unsupervised fashion. These models work in a semi-supervised way - the representations generated by the model is fed to an off-the-shelf predictor like Support Vector Machines (SVM) to solve a particular downstream task. These models span across a wide variety of neural network based architectures including average of word vectors, convolutional-based, recurrent-based and so on. We believe that the performance of these models is highly dependent on the objective function it optimizes – predicting adjacent word (within-tweet relationships), adjacent tweet (inter-tweet relationships), the tweet itself (autoencoder), modeling from structured resources like paraphrase databases and weak supervision. In this section, we provide the first of its kind survey of the recent tweet-specific unsupervised models in an organized fashion to understand the literature. Specifically, we categorize each model based on the optimized objective function as shown in Figure FIGREF1 . Next, we study each category one by one."
],
[
"Motivation: Every tweet is assumed to have a latent topic vector, which influences the distribution of the words in the tweet. For example, though the appearance of the phrase catch the ball is frequent in the corpus, if we know that the topic of a tweet is about “technology”, we can expect words such as bug or exception after the word catch (ignoring the) instead of the word ball since catch the bug/exception is more plausible under the topic “technology”. On the other hand, if the topic of the tweet is about “sports”, then we can expect ball after catch. These intuitions indicate that the prediction of neighboring words for a given word strongly relies on the tweet also.",
"Models: BIBREF8 's work is the first to exploit this idea to compute distributed document representations that are good at predicting words in the document. They propose two models: PV-DM and PV-DBOW, that are extensions of Continuous Bag Of Words (CBOW) and Skip-gram model variants of the popular Word2Vec model BIBREF9 respectively – PV-DM inserts an additional document token (which can be thought of as another word) which is shared across all contexts generated from the same document; PV-DBOW attempts to predict the sampled words from the document given the document representation. Although originally employed for paragraphs and documents, these models work better than the traditional models: BOW BIBREF10 and LDA BIBREF11 for tweet classification and microblog retrieval tasks BIBREF12 . The authors in BIBREF12 make the PV-DM and PV-DBOW models concept-aware (a rich semantic signal from a tweet) by augmenting two features: attention over contextual words and conceptual tweet embedding, which jointly exploit concept-level senses of tweets to compute better representations. Both the discussed works have the following characteristics: (1) they use a shallow architecture, which enables fast training, (2) computing representations for test tweets requires computing gradients, which is time-consuming for real-time Twitter applications, and (3) most importantly, they fail to exploit textual information from related tweets that can bear salient semantic signals."
],
[
"Motivation: To capture rich tweet semantics, researchers are attempting to exploit a type of sentence-level Distributional Hypothesis BIBREF10 , BIBREF13 . The idea is to infer the tweet representation from the content of adjacent tweets in a related stream like users' Twitter timeline, topical, retweet and conversational stream. This approach significantly alleviates the context insufficiency problem caused due to the ambiguous and short nature of tweets BIBREF0 , BIBREF14 .",
"Models: Skip-thought vectors BIBREF15 (STV) is a widely popular sentence encoder, which is trained to predict adjacent sentences in the book corpus BIBREF16 . Although the testing is cheap as it involves a cheap forward propagation of the test sentence, STV is very slow to train thanks to its complicated model architecture. To combat this computational inefficiency, FastSent BIBREF17 propose a simple additive (log-linear) sentence model, which predicts adjacent sentences (represented as BOW) taking the BOW representation of some sentence in context. This model can exploit the same signal, but at a much lower computational expense. Parallel to this work, Siamase CBOW BIBREF18 develop a model which directly compares the BOW representation of two sentence to bring the embeddings of a sentence closer to its adjacent sentence, away from a randomly occurring sentence in the corpus. For FastSent and Siamese CBOW, the test sentence representation is a simple average of word vectors obtained after training. Both of these models are general purpose sentence representation models trained on book corpus, yet give a competitive performance over previous models on the tweet semantic similarity computation task. BIBREF14 's model attempt to exploit these signals directly from Twitter. With the help of attention technique and learned user representation, this log-linear model is able to capture salient semantic information from chronologically adjacent tweets of a target tweet in users' Twitter timeline."
],
[
"Motivation: In recent times, building representation models based on supervision from richly structured resources such as Paraphrase Database (PPDB) BIBREF19 (containing noisy phrase pairs) has yielded high quality sentence representations. These methods work by maximizing the similarity of the sentences in the learned semantic space.",
"Models: CHARAGRAM BIBREF20 embeds textual sequences by learning a character-based compositional model that involves addition of the vectors of its character n-grams followed by an elementwise nonlinearity. This simpler architecture trained on PPDB is able to beat models with complex architectures like CNN, LSTM on SemEval 2015 Twitter textual similarity task by a large margin. This result emphasizes the importance of character-level models that address differences due to spelling variation and word choice. The authors in their subsequent work BIBREF21 conduct a comprehensive analysis of models spanning the range of complexity from word averaging to LSTMs for its ability to do transfer and supervised learning after optimizing a margin based loss on PPDB. For transfer learning, they find models based on word averaging perform well on both the in-domain and out-of-domain textual similarity tasks, beating LSTM model by a large margin. On the other hand, the word averaging models perform well for both sentence similarity and textual entailment tasks, outperforming the LSTM. However, for sentiment classification task, they find LSTM (trained on PPDB) to beat the averaging models to establish a new state of the art. The above results suggest that structured resources play a vital role in computing general-purpose embeddings useful in downstream applications."
],
[
"Motivation: The autoencoder based approach learns latent (or compressed) representation by reconstructing its own input. Since textual data like tweets contain discrete input signals, sequence-to-sequence models BIBREF22 like STV can be used to build the solution. The encoder model which encodes the input tweet can typically be a CNN BIBREF23 , recurrent models like RNN, GRU, LSTM BIBREF24 or memory networks BIBREF25 . The decoder model which generates the output tweet can typically be a recurrent model that predicts a output token at every time step.",
"Models: Sequential Denoising Autoencoders (SDAE) BIBREF17 is a LSTM-based sequence-to-sequence model, which is trained to recover the original data from the corrupted version. SDAE produces robust representations by learning to represent the data in terms of features that explain its important factors of variation. Tweet2Vec BIBREF3 is a recent model which uses a character-level CNN-LSTM encoder-decoder architecture trained to construct the input tweet directly. This model outperforms competitive models that work on word-level like PV-DM, PV-DBOW on semantic similarity computation and sentiment classification tasks, thereby showing that the character-level nature of Tweet2Vec is best-suited to deal with the noise and idiosyncrasies of tweets. Tweet2Vec controls the generalization error by using a data augmentation technique, wherein tweets are replicated and some of the words in the replicated tweets are replaced with their synonyms. Both SDAE and Tweet2Vec has the advantage that they don't need a coherent inter-sentence narrative (like STV), which is hard to obtain in Twitter."
],
[
"Motivation: In a weakly supervised setup, we create labels for a tweet automatically and predict them to learn potentially sophisticated models than those obtained by unsupervised learning alone. Examples of labels include sentiment of the overall tweet, words like hashtag present in the tweet and so on. This technique can create a huge labeled dataset especially for building data-hungry, sophisticated deep learning models.",
"Models: BIBREF26 learns sentiment-specific word embedding (SSWE), which encodes the polarity information in the word representations so that words with contrasting polarities and similar syntactic context (like good and bad) are pushed away from each other in the semantic space that it learns. SSWE utilizes the massive distant-supervised tweets collected by positive and negative emoticons to build a powerful tweet representation, which are shown to be useful in tasks such as sentiment classification and word similarity computation in sentiment lexicon. BIBREF2 observes that hashtags in tweets can be considered as topics and hence tweets with similar hashtags must come closer to each other. Their model predicts the hashtags by using a Bi-GRU layer to embed the tweets from its characters. Due to subword modeling, such character-level models can approximate the representations for rare words and new words (words not seen during training) in the test tweets really well. This model outperforms the word-level baselines for hashtag prediction task, thereby concluding that exploring character-level models for tweets is a worthy research direction to pursue. Both these works fail to study the model's generality BIBREF27 , i.e., the ability of the model to transfer the learned representations to diverse tasks."
],
[
"In this section we present the future research directions which we believe can be worth pursuing to generate high quality tweet embeddings."
],
[
"In this work we study the problem of learning unsupervised tweet representations. We believe our survey of the existing works based on the objective function can give vital perspectives to researchers and aid their understanding of the field. We also believe the future research directions studied in this work can help in breaking the barriers in building high quality, general purpose tweet representation models."
]
],
"section_name": [
"Introduction",
"Unsupervised Tweet Representation Models",
"Modeling within-tweet relationships",
"Modeling inter-tweet relationships",
"Modeling from structured resources",
"Modeling as an autoencoder",
"Modeling using weak supervision",
"Future Directions",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"ad46b6002d203076f0676f08d3d03364bdefd9cb",
"bfea9801642d4782a4c9c8c51fe289962c4804a0",
"cbc206ccba7ddd72f4755aeac717a069cf0c376d"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Motivation: In recent times, building representation models based on supervision from richly structured resources such as Paraphrase Database (PPDB) BIBREF19 (containing noisy phrase pairs) has yielded high quality sentence representations. These methods work by maximizing the similarity of the sentences in the learned semantic space.",
"Models: Skip-thought vectors BIBREF15 (STV) is a widely popular sentence encoder, which is trained to predict adjacent sentences in the book corpus BIBREF16 . Although the testing is cheap as it involves a cheap forward propagation of the test sentence, STV is very slow to train thanks to its complicated model architecture. To combat this computational inefficiency, FastSent BIBREF17 propose a simple additive (log-linear) sentence model, which predicts adjacent sentences (represented as BOW) taking the BOW representation of some sentence in context. This model can exploit the same signal, but at a much lower computational expense. Parallel to this work, Siamase CBOW BIBREF18 develop a model which directly compares the BOW representation of two sentence to bring the embeddings of a sentence closer to its adjacent sentence, away from a randomly occurring sentence in the corpus. For FastSent and Siamese CBOW, the test sentence representation is a simple average of word vectors obtained after training. Both of these models are general purpose sentence representation models trained on book corpus, yet give a competitive performance over previous models on the tweet semantic similarity computation task. BIBREF14 's model attempt to exploit these signals directly from Twitter. With the help of attention technique and learned user representation, this log-linear model is able to capture salient semantic information from chronologically adjacent tweets of a target tweet in users' Twitter timeline."
],
"extractive_spans": [
" Paraphrase Database (PPDB) ",
" book corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"Motivation: In recent times, building representation models based on supervision from richly structured resources such as Paraphrase Database (PPDB) BIBREF19 (containing noisy phrase pairs) has yielded high quality sentence representations. ",
"Both of these models are general purpose sentence representation models trained on book corpus, yet give a competitive performance over previous models on the tweet semantic similarity computation task"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"35c504b8beb9bacec9a1fa9eced393e28a009ad6",
"7e816f6d4bea2f5eecae3ea07c705e4323609bef"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"af41907ffc87939fb665db70d1a336da832c0f6b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"8d7a77c56c2eff866c4b002644aeb8c30586b2ab"
],
"answer": [
{
"evidence": [
"Despite this, there is a lack of prior work which surveys the tweet-specific unsupervised representation learning models. In this work, we attempt to fill this gap by investigating the models in an organized fashion. Specifically, we group the models based on the objective function it optimizes. We believe this work can aid the understanding of the existing literature. We conclude the paper by presenting interesting future research directions, which we believe are fruitful in advancing this field by building high-quality tweet representation learning models.",
"There are various models spanning across different model architectures and objective functions in the literature to compute tweet representation in an unsupervised fashion. These models work in a semi-supervised way - the representations generated by the model is fed to an off-the-shelf predictor like Support Vector Machines (SVM) to solve a particular downstream task. These models span across a wide variety of neural network based architectures including average of word vectors, convolutional-based, recurrent-based and so on. We believe that the performance of these models is highly dependent on the objective function it optimizes – predicting adjacent word (within-tweet relationships), adjacent tweet (inter-tweet relationships), the tweet itself (autoencoder), modeling from structured resources like paraphrase databases and weak supervision. In this section, we provide the first of its kind survey of the recent tweet-specific unsupervised models in an organized fashion to understand the literature. Specifically, we categorize each model based on the optimized objective function as shown in Figure FIGREF1 . Next, we study each category one by one."
],
"extractive_spans": [],
"free_form_answer": "They group the existing works in terms of the objective function they optimize - within-tweet relationships, inter-tweet relationships, autoencoder, and weak supervision.",
"highlighted_evidence": [
"Specifically, we group the models based on the objective function it optimizes. We believe this work can aid the understanding of the existing literature. ",
"We believe that the performance of these models is highly dependent on the objective function it optimizes – predicting adjacent word (within-tweet relationships), adjacent tweet (inter-tweet relationships), the tweet itself (autoencoder), modeling from structured resources like paraphrase databases and weak supervision. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Which dataset do they use?",
"Do they evaluate their learned representations on downstream tasks?",
"Which representation learning architecture do they adopt?",
"How do they encourage understanding of literature as part of their objective function?"
],
"question_id": [
"eda4869c67fe8bbf83db632275f053e7e0241e8c",
"2c7494d47b2a69f182e83455fe4c75ae3b2893e9",
"4d7ff4e5d06902de85b0e9a364dc455196d06a7d",
"ecc63972b2783ee39b3e522653cfb6dc5917d522"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Unsupervised Tweet Representation Models Hierarchy based on Optimized Objective Function"
],
"file": [
"2-Figure1-1.png"
]
} | [
"How do they encourage understanding of literature as part of their objective function?"
] | [
[
"1706.09673-Unsupervised Tweet Representation Models-0",
"1706.09673-Introduction-1"
]
] | [
"They group the existing works in terms of the objective function they optimize - within-tweet relationships, inter-tweet relationships, autoencoder, and weak supervision."
] | 162 |
1906.07662 | State-of-the-Art Vietnamese Word Segmentation | Word segmentation is the first step of any tasks in Vietnamese language processing. This paper reviews stateof-the-art approaches and systems for word segmentation in Vietnamese. To have an overview of all stages from building corpora to developing toolkits, we discuss building the corpus stage, approaches applied to solve the word segmentation and existing toolkits to segment words in Vietnamese sentences. In addition, this study shows clearly the motivations on building corpus and implementing machine learning techniques to improve the accuracy for Vietnamese word segmentation. According to our observation, this study also reports a few of achivements and limitations in existing Vietnamese word segmentation systems. | {
"paragraphs": [
[
"Lexical analysis, syntactic analysis, semantic analysis, disclosure analysis and pragmatic analysis are five main steps in natural language processing BIBREF0 , BIBREF1 . While morphology is a basic task in lexical analysis of English, word segmentation is considered a basic task in lexical analysis of Vietnamese and other East Asian languages processing. This task is to determine borders between words in a sentence. In other words, it is segmenting a list of tokens into a list of words such that words are meaningful.",
"Word segmentation is the primary step in prior to other natural language processing tasks i. e., term extraction and linguistic analysis (as shown in Figure 1). It identifies the basic meaningful units in input texts which will be processed in the next steps of several applications. For named entity recognization BIBREF2 , word segmentation chunks sentences in input documents into sequences of words before they are further classified in to named entity classes. For Vietnamese language, words and candidate terms can be extracted from Vietnamese copora (such as books, novels, news, and so on) by using a word segmentation tool. Conformed features and context of these words and terms are used to identify named entity tags, topic of documents, or function words. For linguistic analysis, several linguistic features from dictionaries can be used either to annotating POS tags or to identifying the answer sentences. Moreover, language models can be trained by using machine learning approaches and be used in tagging systems, like the named entity recognization system of Tran et al. BIBREF2 .",
"Many studies forcus on word segmentation for Asian languages, such as: Chinese, Japanese, Burmese (Myanmar) and Thai BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . Approaches for word segmentation task are variety, from lexicon-based to machine learning-based methods. Recently, machine learning-based methods are used widely to solve this issue, such as: Support Vector Machine or Conditional Random Fields BIBREF7 , BIBREF8 . In general, Chinese is a language which has the most studies on the word segmentation issue. However, there is a lack of survey of word segmentation studies on Asian languages and Vietnamese as well. This paper aims reviewing state-of-the-art word segmentation approaches and systems applying for Vietnamese. This study will be a foundation for studies on Vietnamese word segmentation and other following Vietnamese tasks as well, such as part-of-speech tagger, chunker, or parser systems.",
"There are several studies about the Vietnamese word segmentation task over the last decade. Dinh et al. started this task with Weighted Finite State Transducer (WFST) approach and Neural Network approach BIBREF9 . In addition, machine learning approaches are studied and widely applied to natural language processing and word segmentation as well. In fact, several studies used support vector machines (SVM) and conditional random fields (CRF) for the word segmentation task BIBREF7 , BIBREF8 . Based on annotated corpora and token-based features, studies used machine learning approaches to build word segmentation systems with accuracy about 94%-97%.",
"According to our observation, we found that is lacks of complete review approaches, datasets and toolkits which we recently used in Vietnamese word segmentation. A all sided review of word segmentation will help next studies on Vietnamese natural language processing tasks have an up-to-date guideline and choose the most suitable solution for the task. The remaining part of the paper is organized as follows. Section II discusses building corpus in Vietnamese, containing linguistic issues and the building progress. Section III briefly mentions methods to model sentences and text in machine learning systems. Next, learning models and approaches for labeling and segmenting sequence data will be presented in Section IV. Section V mainly addresses two existing toolkits, vnTokenizer and JVnSegmenter, for Vietnamese word segmentation. Several experiments based on mentioned approaches and toolkits are described in Section VI. Finally, conclusions and future works are given in Section VII."
],
[
"Vietnamese, like many languages in continental East Asia, is an isolating language and one branch of Mon-Khmer language group. The most basic linguistic unit in Vietnamese is morpheme, similar with syllable or token in English and “hình vị” (phoneme) or “tiếng” (syllable) in Vietnamese. According to the structured rule of its, Vietnamese can have about 20,000 different syllables (tokens). However, there are about 8,000 syllables used the Vietnamese dictionaries. There are three methods to identify morphemes in Vietnamese text BIBREF10 .",
"Morpheme is the smallest meaningful unit of Vietnamese.",
"Morpheme is the basic unit of Vietnamese.",
"Morpheme is the smallest meaningful unit and is not used independently in the syntax factor.",
"In computational linguistics, morpheme is the basic unit of languages as Leonard Bloomfield mentioned for English BIBREF11 . In our research for Vietnamese, we consider the morpheme as syllable, called “tiếng” in Vietnamese (as Nguyen’s definition BIBREF12 ).",
"The next concept in linguistics is word which has fully grammar and meaning function in sentences. For Vietnamese, word is a single morpheme or a group of morphemes, which are fixed and have full meaning BIBREF12 . According to Nguyen, Vietnamese words are able classified into two types, (1) 1- syllable words with fully meaning and (2) n-syllables words whereas these group of tokens are fixed. Vietnamese syllable is not fully meaningful. However, it is also explained in the meaning and structure characteristics. For example, the token “kỳ” in “quốc kỳ” whereas “quốc” means national, “kỳ” means flag. Therefore, “quốc kỳ” means national flag.",
"Consider dictionary used for evaluating the corpus, extracting features for models, and evaluating the systems, there are many Vietnamese dictionaries, however we recommend the Vietnamese dictionary of Hoang Phe, so called Hoang Phe Dictionary. This dictionary has been built by a group of linguistical scientists at the Linguistic Institute, Vietnam. It was firstly published in 1988, reprinted and extended in 2000, 2005 and 2010. The dictionary currently has 45,757 word items with 15,901 Sino-Vietnamese word items (accounting for 34.75%) BIBREF13 ."
],
[
"In Vietnamese, not all of meaningful proper names are in the dictionary. Identifying proper names in input text are also important issue in word segmentation. This issue is sometimes included into unknown word issue to be solved. In addition, named entity recognition has to classify it into several types such as person, location, organization, time, money, number, and so on.",
"Proper name identification can be solved by characteristics. For example, systems use beginning characters of proper names which are uppercase characters. Moreover, a list of proper names is also used to identify names in the text. In particular, a list of 2000 personal names extracted from VietnamGiaPha, and a list of 707 names of locations in Vietnam extracted from vi.wikipedia.org are used in the study of Nguyen et al. for Vietnamese word segmentation BIBREF7 ."
],
[
"In general, building corpus is carried out through four stages: (1) choose target of corpus and source of raw data; (2) building a guideline based on linguistics knowledge for annotation; (3) annotating or tagging corpus based on rule set in the guideline; and (4) reviewing corpus to check the consistency issue.",
"Encoding word segmentation corpus using B-I-O tagset can be applied, where B, I, and O denoted begin of word, inside of word, and others, respectively. For example, the sentence “Megabit trên giây là đơn vị đo tốc đọ truyền dẫn dữ liệu .\" (”Megabit per second is a unit to measure the network traffic.” in English) with the word boundary result “Megabit trên giây là đơn_vị đo tốc_độ truyền_dẫn dữ_liệu .\" is encoded as “Megabit/B trên/B giây/B là/B đơn/B vị/I đo/B tốc/B độ/I truyền/B dẫn/I dữ/B liệu/I ./O\" .",
"Annotation guidelines can be applied to ensure that annotated corpus has less errors because the manual annotation is applied. Even though there are guidelines for annotating, the available output corpora are still inconsistent. For example, for the Vietnamese Treebank corpus of the VLSP project, Nguyen et al. listed out several Vietnamese word segmentation inconsistencies in the corpus based on POS information and n-gram sequences BIBREF14 .",
"Currently, there are at least three available word segmentation corpus used in Vietnamese word segmentation studies and systems. Firstly, Dinh et al. built the CADASA corpus from CADASA’s books BIBREF15 . Secondly, Nguyen et al. built vnQTAG corpus from general news articles BIBREF7 . More recently, Ngo et al. introduced the EVBCorpus corpus, which is collected from four sources, news articles, books, law documents, and novels. As a part of EVBCorpus, EVBNews, was annotated common tags in NLP, such as word segmentation, chunker, and named entity BIBREF16 . All of these corpora are collected from news articles or book stories, and they are manually annotated the word boundary tags (as shown in Table I)."
],
[
"To understand natural language and analyze documents and text, computers need to represent natural languages as linguistics models. These models can be generated by using machine learning methods (as show in Figure 2).",
"There are two common modeling methods for basic NLP tasks, including n-gram model and bag-of-words model. The n-gram model is widely used in natural language processing while the bag-of-words model is a simplified representation used in natural language processing and information retrieval BIBREF17 , BIBREF18 . According to the bag-of-words model, the representative vector of sentences in the document does not preserve the order of the words in the original sentences. It represents the word using term frequency collected from the document rather than the order of words or the structure of sentences in the document. The bag-of-words model is commonly used in methods of document classification, where the frequency of occurrence of each word is used as an attribute feature for training a classifier. In contrast, an n-gram is a contiguous sequence of n items from a given sequence of text. An n-gram model is a type of probabilistic language model for predicting the next item in a given sequence in form of a Markov model. To address word segmentation issue, the n-gram model is usually used for approaches because it considers the order of tokens in the original sentences. The sequence is also kept the original order as input and output sentences."
],
[
"There are several studies for Vietnamese Word Segmentation during last decade. For instance, Dinh et al. started the word segmentation task for Vietnamese with Neural Network and Weighted Finite State Transducer (WFST) BIBREF9 . Nguyen et al. continued with machine learning approaches, Conditional Random Fields and Support Vector Machine BIBREF7 . Most of statistical approaches are based on the architecture as shown in Figure 2. According to the architecture, recent studies and systems focus on either improving or modifying difference learning models to get the highest accuracy. Features used in word segmentation systems are syllable, dictionary, and entity name. The detail of all widely used techniques applied are collected and described in following subsections."
],
[
"Maximum matching (MM) is one of the most popular fundamental and structural segmentation algorithms for word segmentation BIBREF19 . This method is also considered as the Longest Matching (LM) in several research BIBREF9 , BIBREF3 . It is used for identifying word boundary in languages like Chinese, Vietnamese and Thai. This method is a greedy algorithm, which simply chooses longest words based on the dictionary. Segmentation may start from either end of the line without any difference in segmentation results. If the dictionary is sufficient BIBREF19 , the expected segmentation accuracy is over 90%, so it is a major advantage of maximum matching . However, it does not solve the problem of ambiguous words and unknown words that do not exist in the dictionary.",
"There are two types of the maximum matching approach: forward MM (FMM) and backward MM (BMM). FMM starts from the beginning token of the sentence while BMM starts from the end. If the sentence has word boundary ambiguities, the output of FMM and BMM will be different. When applying FMM and BMM, there are two types of common errors due to two ambiguities: overlapping ambiguities and combination ambiguity. Overlapping ambiguities occur when the text AB has both word A, B and AB, which are in the dictionary while the text ABC has word AB and BC, which are in the dictionary. For example, \"cụ già đi nhanh quá\" (there two meanings: ”the old man goes very fast” or ”the old man died suddenly”) is a case of the overlapping ambiguity while \"tốc độ truyền thông tin\" is a case of the combination ambiguity.",
"As shown in Figure 2, the method simplification ambiguities, maximum matching is the first step to get features for the modelling stage in machine learning systems, like Conditional Random Fields or Support Vector Machines."
],
[
"In Markov chain model is represented as a chain of tokens which are observations, and word taggers are represented as predicted labels. Many researchers applied Hidden Markov model to solve Vietnamese word segmentation such as in BIBREF8 , BIBREF20 and so on.",
"N-gram language modeling applied to estimate probabilities for each word segmentation solution BIBREF21 . The result of this method depends on copora and is based maximal matching strategy. So, they do not solve missing word issue. Let INLINEFORM0 is a product of probabilities of words created from sentence s (1) with length INLINEFORM1 : DISPLAYFORM0 ",
"Each conditional probability of word is based on the last n-1 words (n-gram) in the sentence s. It is estimated by Markov chain model for word w from position i-n+1 to i-1 with probability (2) DISPLAYFORM0 ",
"We have equation (3) DISPLAYFORM0 "
],
[
"Maximum Entropy theory is applied to solve Vietnamese word segmentation BIBREF15 , BIBREF22 , BIBREF23 . Some researchers do not want the limit in Markov chain model. So, they use the context around of the word needed to be segmented. Let h is a context, w is a list of words and t is a list of taggers, Le BIBREF15 , BIBREF22 used DISPLAYFORM0 ",
"P(s) is also a product of probabilities of words created from sentence INLINEFORM0 (1). Each conditional probability of word is based on context h of the last n word in the sentence s."
],
[
"To tokenize a Vietnamese word, in HMM or ME, authors only rely on features around a word segment position. Some other features are also affected by adding more special attributes, such as, in case ’?’ question mark at end of sentence, Part of Speech (POS), and so on. Conditional Random Fields is one of methods that uses additional features to improve the selection strategy BIBREF7 .",
"There are several CRF libraries, such as CRF++, CRFsuite. These machine learning toolkits can be used to solve the task by providing an annotated corpus with extracted features. The toolkit will be used to train a model based on the corpus and extract a tagging model. The tagging model will then be used to tag on input text without annotated corpus. In the training and tagging stages, extracting features from the corpus and the input text is necessary for both stages."
],
[
"Support Vector Machines (SVM) is a supervised machine learning method which considers dataset as a set of vectors and tries to classify them into specific classes. Basically, SVM is a binary classifier. however, most classification tasks are multi-class classifiers. When applying SVMs, the method has been extended to classify three or more classes. Particular NLP tasks, like word segmentation and Part-of-speech task, each token/word in documents will be used as a feature vector. For the word segmentation task, each token and its features are considered as a vector for the whole document, and the SVM model will classify this vector into one of the three tags (B-IO).",
"This technique is applied for Vietnamese word segmentation in several studies BIBREF7 , BIBREF24 . Nguyen et al. applied on a segmented corpus of 8,000 sentences and got the result at 94.05% while Ngo et al. used it with 45,531 segmented sentences and get the result at 97.2%. It is worth to mention that general SVM libraries (such as LIBSVM, LIBLINEAR, SVMlight, Node-SVM, and TreeSVM ), YamCha is an opened source SVM library that serves several NLP tasks: POS tagging, Named Entity Recognition, base NP chunking, Text Chunking, Text Classification and event Word Segmentation."
],
[
"vnTokenizer and JVnSegmenter are two famous segmentation toolkits for Vietnamese word segmentation. Both two word segmentation toolkits are implemented the word segmentation data process in Figure 2. This section gives more details of these Vietnamese word toolkits."
],
[
"In general, Java and C++ are the most common language in developing toolkits and systems for natural language processing tasks. For example, GATE, OpenNLP, Stanford CoreNLP and LingPipe platforms are developed by JAVA while foundation tasks and machine learning toolkits are developed by C++. CRF++, SVMLight and YAMCHA . Recently, Python becomes popular among the NLP community. In fact, many toolkits and platforms have been developed by this language, such as NLTK, PyNLPl library for Natural Language Processing."
],
[
"JVnSegmenter is a Java-based Vietnamese Word Segmentation Tool developed by Nguyen and Phan. The segmentation model in this tool was trained on about 8,000 tagged Vietnamese text sentences based on CRF model and the model extracted over 151,000 words from the training corpus. In addition, this is used in building the EnglishVietnamese Translation System BIBREF25 , BIBREF26 , BIBREF27 . Vietnamese text classification BIBREF28 and building Vietnamese corpus BIBREF29 , BIBREF30 ."
],
[
"vnTokenizer is implemented in Java and bundled as Eclipse plug-in, and it has already been integrated into vnToolkit, an Eclipse Rich Client application, which is intended to be a general framework integrating tools for processing of Vietnamese text. vnTokenizer plug-in, vnToolkit and related resources, including the lexicon and test corpus are freely available for download. According to our observation, many research cited vnTokenizer to use word segmentation results for applications as building a large Vietnamese corpus BIBREF31 , building an English-Vietnamese Bilingual Corpus for Machine Translation BIBREF32 , Vietnamese text classification BIBREF33 , BIBREF34 , etc."
],
[
"This research gathers the results of Vietnamese word segmentation of several methods into one table as show in Table II. It is noted that they are not evaluated on a same corpus. The purpose of the result illustration is to provide an overview of the results of current Vietnamese word segmentation systems based on their individual features. All studies mentioned in the table have accuracy around 94-97% based on their provided corpus.",
"This study also evaluates the Vietnamese word segmentation based on existing toolkits using the same annotated Vietnamese word segmentation corpus. There are two available toolkits to evaluate and to segment. To be neutral to both toolkits, we use the EVBNews Vietnamese corpus, a part of EVBCorpus, to evaluate Vietnamese word segmentation. The EVBNews corpus contains over 45,000 segmented Vietnamese sentences extracted from 1,000 general news articles (as shown in Table III) BIBREF16 . We used the same training set which has 1000 files and 45,531 sentences. vnTokenizer outputs 831,455 Vietnamese words and 1,206,475 tokens. JVnSegmenter outputs 840,387 words and 1,201,683. We correct tags (BIO), and compare to previous outputs, we have rate from vnTokenizer is 95.6% and from JVnsegmenter is 93.4%. The result of both vnTokenizer and JVnSegmenter testing on the EVBNews Vietnamese Corpus are provided in Table IV."
],
[
"This study reviewed state-of-the-art approaches and systems of Vietnamese word segmentation. The review pointed out common features and methods used in Vietnamese word segmentation studies. This study also had an evaluation of the existing Vietnamese word segmentation toolkits based on a same corpus to show advantages and disadvantages as to shed some lights on system enhancement.",
"There are several challenges on supervised learning approaches in future work. The first challenge is to acquire very large Vietnamese corpus and to use them in building a classifier, which could further improve accuracy. In addition, applying linguistics knowledge on word context to extract useful features also enhances prediction performance. The second challenge is design and development of big data warehouse and analytic framework for Vietnamese documents, which corresponds to the rapid and continuous growth of gigantic volume of articles and/or documents from Web 2.0 applications, such as, Facebook, Twitter, and so on. It should be addressed that there are many kinds of Vietnamese documents, for example, Han - Nom documents and old and modern Vietnamese documents that are essential and still needs further analysis. According to our study, there is no a powerful Vietnamese language processing used for processing Vietnamese big data as well as understanding such language. The final challenge relates to building a system, which is able to incrementally learn new corpora and interactively process feedback. In particular, it is feasible to build an advance NLP system for Vietnamese based on Hadoop platform to improve system performance and to address existing limitations."
]
],
"section_name": [
"Introduction",
"Language Definition",
"Name Entity Issue",
"Building Corpus",
"TEXT MODELLING AND FEATURES",
"BUILDING MODEL METHODS",
"Maximum Matching",
"Hidden Markov Model (HMM)",
"Maximum Entropy (ME)",
"Conditional Random Fields",
"Support Vector Machines",
"TOOLKITS",
"Programming Languages",
"JVnSegmenter",
"vnTokenizer",
"EVALUATION AND RESULTS",
"CONCLUSIONS AND FUTURE WORKS"
]
} | {
"answers": [
{
"annotation_id": [
"46027f5f566e811ebd2d7252a9c9241b41118264",
"bca6c995ec77f64bb73c174fcce08a0fe27a2b6e"
],
"answer": [
{
"evidence": [
"Maximum matching (MM) is one of the most popular fundamental and structural segmentation algorithms for word segmentation BIBREF19 . This method is also considered as the Longest Matching (LM) in several research BIBREF9 , BIBREF3 . It is used for identifying word boundary in languages like Chinese, Vietnamese and Thai. This method is a greedy algorithm, which simply chooses longest words based on the dictionary. Segmentation may start from either end of the line without any difference in segmentation results. If the dictionary is sufficient BIBREF19 , the expected segmentation accuracy is over 90%, so it is a major advantage of maximum matching . However, it does not solve the problem of ambiguous words and unknown words that do not exist in the dictionary."
],
"extractive_spans": [
" ambiguous words",
"unknown words"
],
"free_form_answer": "",
"highlighted_evidence": [
"Maximum matching (MM) is one of the most popular fundamental and structural segmentation algorithms for word segmentation BIBREF19 . ",
"However, it does not solve the problem of ambiguous words and unknown words that do not exist in the dictionary."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"According to our observation, we found that is lacks of complete review approaches, datasets and toolkits which we recently used in Vietnamese word segmentation. A all sided review of word segmentation will help next studies on Vietnamese natural language processing tasks have an up-to-date guideline and choose the most suitable solution for the task. The remaining part of the paper is organized as follows. Section II discusses building corpus in Vietnamese, containing linguistic issues and the building progress. Section III briefly mentions methods to model sentences and text in machine learning systems. Next, learning models and approaches for labeling and segmenting sequence data will be presented in Section IV. Section V mainly addresses two existing toolkits, vnTokenizer and JVnSegmenter, for Vietnamese word segmentation. Several experiments based on mentioned approaches and toolkits are described in Section VI. Finally, conclusions and future works are given in Section VII."
],
"extractive_spans": [
"lacks of complete review approaches, datasets and toolkits "
],
"free_form_answer": "",
"highlighted_evidence": [
"According to our observation, we found that is lacks of complete review approaches, datasets and toolkits which we recently used in Vietnamese word segmentation. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"361c83b9f73bf03e5fe0851b74b98c7c6b020d46",
"bdb04e7c7e77f60aa6d56368bbbcaecf84403e4b"
],
"answer": [
{
"evidence": [
"There are several challenges on supervised learning approaches in future work. The first challenge is to acquire very large Vietnamese corpus and to use them in building a classifier, which could further improve accuracy. In addition, applying linguistics knowledge on word context to extract useful features also enhances prediction performance. The second challenge is design and development of big data warehouse and analytic framework for Vietnamese documents, which corresponds to the rapid and continuous growth of gigantic volume of articles and/or documents from Web 2.0 applications, such as, Facebook, Twitter, and so on. It should be addressed that there are many kinds of Vietnamese documents, for example, Han - Nom documents and old and modern Vietnamese documents that are essential and still needs further analysis. According to our study, there is no a powerful Vietnamese language processing used for processing Vietnamese big data as well as understanding such language. The final challenge relates to building a system, which is able to incrementally learn new corpora and interactively process feedback. In particular, it is feasible to build an advance NLP system for Vietnamese based on Hadoop platform to improve system performance and to address existing limitations."
],
"extractive_spans": [],
"free_form_answer": "Acquire very large Vietnamese corpus and build a classifier with it, design a develop a big data warehouse and analytic framework, build a system to incrementally learn new corpora and interactively process feedback.",
"highlighted_evidence": [
"The first challenge is to acquire very large Vietnamese corpus and to use them in building a classifier, which could further improve accuracy. ",
"The second challenge is design and development of big data warehouse and analytic framework for Vietnamese documents, which corresponds to the rapid and continuous growth of gigantic volume of articles and/or documents from Web 2.0 applications, such as, Facebook, Twitter, and so on.",
"The final challenge relates to building a system, which is able to incrementally learn new corpora and interactively process feedback."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"There are several challenges on supervised learning approaches in future work. The first challenge is to acquire very large Vietnamese corpus and to use them in building a classifier, which could further improve accuracy. In addition, applying linguistics knowledge on word context to extract useful features also enhances prediction performance. The second challenge is design and development of big data warehouse and analytic framework for Vietnamese documents, which corresponds to the rapid and continuous growth of gigantic volume of articles and/or documents from Web 2.0 applications, such as, Facebook, Twitter, and so on. It should be addressed that there are many kinds of Vietnamese documents, for example, Han - Nom documents and old and modern Vietnamese documents that are essential and still needs further analysis. According to our study, there is no a powerful Vietnamese language processing used for processing Vietnamese big data as well as understanding such language. The final challenge relates to building a system, which is able to incrementally learn new corpora and interactively process feedback. In particular, it is feasible to build an advance NLP system for Vietnamese based on Hadoop platform to improve system performance and to address existing limitations."
],
"extractive_spans": [
"to acquire very large Vietnamese corpus and to use them in building a classifier",
" design and development of big data warehouse and analytic framework for Vietnamese documents",
"to building a system, which is able to incrementally learn new corpora and interactively process feedback"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first challenge is to acquire very large Vietnamese corpus and to use them in building a classifier, which could further improve accuracy. ",
"The second challenge is design and development of big data warehouse and analytic framework for Vietnamese documents, which corresponds to the rapid and continuous growth of gigantic volume of articles and/or documents from Web 2.0 applications, such as, Facebook, Twitter, and so on. It should be addressed that there are many kinds of Vietnamese documents, for example, Han - Nom documents and old and modern Vietnamese documents that are essential and still needs further analysis. ",
" The final challenge relates to building a system, which is able to incrementally learn new corpora and interactively process feedback. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"5b455d547e1d00eb13fb47bf02bb4559a6dc77e7"
],
"answer": [
{
"evidence": [
"There are several studies about the Vietnamese word segmentation task over the last decade. Dinh et al. started this task with Weighted Finite State Transducer (WFST) approach and Neural Network approach BIBREF9 . In addition, machine learning approaches are studied and widely applied to natural language processing and word segmentation as well. In fact, several studies used support vector machines (SVM) and conditional random fields (CRF) for the word segmentation task BIBREF7 , BIBREF8 . Based on annotated corpora and token-based features, studies used machine learning approaches to build word segmentation systems with accuracy about 94%-97%."
],
"extractive_spans": [],
"free_form_answer": "Their accuracy in word segmentation is about 94%-97%.",
"highlighted_evidence": [
"Based on annotated corpora and token-based features, studies used machine learning approaches to build word segmentation systems with accuracy about 94%-97%."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"7d4e72e453b35e2b121f63db2ef4e911fc847981",
"e1980d8aef79209f4fc6cc2de8b2ba84db1dd7fa"
],
"answer": [
{
"evidence": [
"Maximum Entropy theory is applied to solve Vietnamese word segmentation BIBREF15 , BIBREF22 , BIBREF23 . Some researchers do not want the limit in Markov chain model. So, they use the context around of the word needed to be segmented. Let h is a context, w is a list of words and t is a list of taggers, Le BIBREF15 , BIBREF22 used DISPLAYFORM0",
"There are several studies about the Vietnamese word segmentation task over the last decade. Dinh et al. started this task with Weighted Finite State Transducer (WFST) approach and Neural Network approach BIBREF9 . In addition, machine learning approaches are studied and widely applied to natural language processing and word segmentation as well. In fact, several studies used support vector machines (SVM) and conditional random fields (CRF) for the word segmentation task BIBREF7 , BIBREF8 . Based on annotated corpora and token-based features, studies used machine learning approaches to build word segmentation systems with accuracy about 94%-97%."
],
"extractive_spans": [
"Maximum Entropy",
"Weighted Finite State Transducer (WFST)",
" support vector machines (SVM)",
"conditional random fields (CRF)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Maximum Entropy theory is applied to solve Vietnamese word segmentation BIBREF15 , BIBREF22 , BIBREF23 .",
"There are several studies about the Vietnamese word segmentation task over the last decade. Dinh et al. started this task with Weighted Finite State Transducer (WFST) approach and Neural Network approach BIBREF9 . In addition, machine learning approaches are studied and widely applied to natural language processing and word segmentation as well. In fact, several studies used support vector machines (SVM) and conditional random fields (CRF) for the word segmentation task BIBREF7 , BIBREF8 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Maximum matching (MM) is one of the most popular fundamental and structural segmentation algorithms for word segmentation BIBREF19 . This method is also considered as the Longest Matching (LM) in several research BIBREF9 , BIBREF3 . It is used for identifying word boundary in languages like Chinese, Vietnamese and Thai. This method is a greedy algorithm, which simply chooses longest words based on the dictionary. Segmentation may start from either end of the line without any difference in segmentation results. If the dictionary is sufficient BIBREF19 , the expected segmentation accuracy is over 90%, so it is a major advantage of maximum matching . However, it does not solve the problem of ambiguous words and unknown words that do not exist in the dictionary.",
"In Markov chain model is represented as a chain of tokens which are observations, and word taggers are represented as predicted labels. Many researchers applied Hidden Markov model to solve Vietnamese word segmentation such as in BIBREF8 , BIBREF20 and so on.",
"Maximum Entropy theory is applied to solve Vietnamese word segmentation BIBREF15 , BIBREF22 , BIBREF23 . Some researchers do not want the limit in Markov chain model. So, they use the context around of the word needed to be segmented. Let h is a context, w is a list of words and t is a list of taggers, Le BIBREF15 , BIBREF22 used DISPLAYFORM0",
"To tokenize a Vietnamese word, in HMM or ME, authors only rely on features around a word segment position. Some other features are also affected by adding more special attributes, such as, in case ’?’ question mark at end of sentence, Part of Speech (POS), and so on. Conditional Random Fields is one of methods that uses additional features to improve the selection strategy BIBREF7 .",
"Support Vector Machines (SVM) is a supervised machine learning method which considers dataset as a set of vectors and tries to classify them into specific classes. Basically, SVM is a binary classifier. however, most classification tasks are multi-class classifiers. When applying SVMs, the method has been extended to classify three or more classes. Particular NLP tasks, like word segmentation and Part-of-speech task, each token/word in documents will be used as a feature vector. For the word segmentation task, each token and its features are considered as a vector for the whole document, and the SVM model will classify this vector into one of the three tags (B-IO)."
],
"extractive_spans": [
"Maximum matching",
"Hidden Markov model ",
"Maximum Entropy",
"Conditional Random Fields ",
"Support Vector Machines"
],
"free_form_answer": "",
"highlighted_evidence": [
"Maximum matching (MM) is one of the most popular fundamental and structural segmentation algorithms for word segmentation BIBREF19 . ",
"Many researchers applied Hidden Markov model to solve Vietnamese word segmentation such as in BIBREF8 , BIBREF20 and so on.",
"Maximum Entropy theory is applied to solve Vietnamese word segmentation BIBREF15 , BIBREF22 , BIBREF23 . ",
"Conditional Random Fields is one of methods that uses additional features to improve the selection strategy BIBREF7 .",
"Support Vector Machines (SVM) is a supervised machine learning method which considers dataset as a set of vectors and tries to classify them into specific classes. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What are the limitations of existing Vietnamese word segmentation systems?",
"Why challenges does word segmentation in Vietnamese pose?",
"How successful are the approaches used to solve word segmentation in Vietnamese?",
"Which approaches have been applied to solve word segmentation in Vietnamese?"
],
"question_id": [
"8d074aabf4f51c8455618c5bf7689d3f62c4da1d",
"fe2666ace293b4bfac3182db6d0c6f03ea799277",
"70a1b0f9f26f1b82c14783f1b76dfb5400444aa4",
"d3ca5f1814860a88ff30761fec3d860d35e39167"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1. Word segmentation tasks (blue) in the Vietnamese natural lanuage processing system.",
"Figure 2. Word segmentation data process",
"Table II EVBNEWS VIETNAMESE CORPUS",
"Table III EVBNEWS VIETNAMESE CORPUS",
"Table IV VIETNAMESE WORD SEGMENTATION RESULT OF VNTOKENIZER AND JVNSEGMENTER"
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"6-TableII-1.png",
"6-TableIII-1.png",
"6-TableIV-1.png"
]
} | [
"Why challenges does word segmentation in Vietnamese pose?",
"How successful are the approaches used to solve word segmentation in Vietnamese?"
] | [
[
"1906.07662-CONCLUSIONS AND FUTURE WORKS-1"
],
[
"1906.07662-Introduction-3"
]
] | [
"Acquire very large Vietnamese corpus and build a classifier with it, design a develop a big data warehouse and analytic framework, build a system to incrementally learn new corpora and interactively process feedback.",
"Their accuracy in word segmentation is about 94%-97%."
] | 163 |
2002.12612 | A multi-layer approach to disinformation detection on Twitter | We tackle the problem of classifying news articles pertaining to disinformation vs mainstream news by solely inspecting their diffusion mechanisms on Twitter. Our technique is inherently simple compared to existing text-based approaches, as it allows to by-pass the multiple levels of complexity which are found in news content (e.g. grammar, syntax, style). We employ a multi-layer representation of Twitter diffusion networks, and we compute for each layer a set of global network features which quantify different aspects of the sharing process. Experimental results with two large-scale datasets, corresponding to diffusion cascades of news shared respectively in the United States and Italy, show that a simple Logistic Regression model is able to classify disinformation vs mainstream networks with high accuracy (AUROC up to 94%), also when considering the political bias of different sources in the classification task. We also highlight differences in the sharing patterns of the two news domains which appear to be country-independent. We believe that our network-based approach provides useful insights which pave the way to the future development of a system to detect misleading and harmful information spreading on social media. | {
"paragraphs": [
[
"In recent years there has been increasing interest on the issue of disinformation spreading on online social media. Global concern over false (or \"fake\") news as a threat to modern democracies has been frequently raised–ever since 2016 US Presidential elections–in correspondence of events of political relevance, where the proliferation of manipulated and low-credibility content attempts to drive and influence people opinions BIBREF0BIBREF1BIBREF2BIBREF3.",
"Researchers have highlighted several drivers for the diffusion of such malicious phenomenon, which include human factors (confirmation bias BIBREF4, naive realism BIBREF5), algorithmic biases (filter bubble effect BIBREF0), the presence of deceptive agents on social platforms (bots and trolls BIBREF6) and, lastly, the formation of echo chambers BIBREF7 where people polarize their opinions as they are insulated from contrary perspectives.",
"The problem of automatically detecting online disinformation news has been typically formulated as a binary classification task (i.e. credible vs non-credible articles), and tackled with a variety of different techniques, based on traditional machine learning and/or deep learning, which mainly differ in the dataset and the features they employ to perform the classification. We may distinguish three approaches: those built on content-based features, those based on features extracted from the social context, and those which combine both aspects. A few main challenges hinder the task, namely the impossibility to manually verify all news items, the lack of gold-standard datasets and the adversarial setting in which malicious content is created BIBREF3BIBREF6.",
"In this work we follow the direction pointed out in a few recent contributions on the diffusion of disinformation compared to traditional and objective information. These have shown that false news spread faster and deeper than true news BIBREF8, and that social bots and echo chambers play an important role in the diffusion of malicious content BIBREF6, BIBREF7. Therefore we focus on the analysis of spreading patterns which naturally arise on social platforms as a consequence of multiple interactions between users, due to the increasing trend in online sharing of news BIBREF0.",
"A deep learning framework for detection of fake news cascades is provided in BIBREF9, where the authors refer to BIBREF8 in order to collect Twitter cascades pertaining to verified false and true rumors. They employ geometric deep learning, a novel paradigm for graph-based structures, to classify cascades based on four categories of features, such as user profile, user activity, network and spreading, and content. They also observe that a few hours of propagation are sufficient to distinguish false news from true news with high accuracy. Diffusion cascades on Weibo and Twitter are analyzed in BIBREF10, where authors focus on highlighting different topological properties, such as the number of hops from the source or the heterogeneity of the network, to show that fake news shape diffusion networks which are highly different from credible news, even at early stages of propagation.",
"In this work, we consider the results of BIBREF11 as our baseline. The authors use off-the-shelf machine learning classifiers to accurately classify news articles leveraging Twitter diffusion networks. To this aim, they consider a set of basic features which can be qualitatively interpreted w.r.t to the social behavior of users sharing credible vs non-credible information. Their methodology is overall in accordance with BIBREF12, where authors successfully detect Twitter astroturfing content, i.e. political campaigns disguised as spontaneous grassroots, with a machine learning framework based on network features.",
"In this paper, we propose a classification framework based on a multi-layer formulation of Twitter diffusion networks. For each article we disentangle different social interactions on Twitter, namely tweets, retweets, mentions, replies and quotes, to accordingly build a diffusion network composed of multiple layers (on for each type of interaction), and we compute structural features separately for each layer. We pick a set of global network properties from the network science toolbox which can be qualitatively explained in terms of social dimensions and allow us to encode different networks with a tuple of features. These include traditional indicators, e.g. network density, number of strong/weak connected components and diameter, and more elaborated ones such as main K-core number BIBREF13 and structural virality BIBREF14. Our main research question is whether the use of a multi-layer, disentangled network yields a significant advance in terms of classification accuracy over a conventional single-layer diffusion network. Additionally, we are interested in understanding which of the above features, and in which layer, are most effective in the classification task.",
"We perform classification experiments with an off-the-shelf Logistic Regression model on two different datasets of mainstream and disinformation news shared on Twitter respectively in the United States and in Italy during 2019. In the former case we also account for political biases inherent to different news sources, referring to the procedure proposed in BIBREF2 to label different outlets. Overall we show that we are able to classify credible vs non-credible diffusion networks (and consequently news articles) with high accuracy (AUROC up to 94%), even when accounting for the political bias of sources (and training only on left-biased or right-biased articles). We observe that the layer of mentions alone conveys useful information for the classification, denoting a different usage of this functionality when sharing news belonging to the two news domains. We also show that most discriminative features, which are relative to the breadth and depth of largest cascades in different layers, are the same across the two countries.",
"The outline of this paper is the following: we first formulate the problem and describe data collection, network representation and structural properties employed for the classification; then we provide experimental results–classification performances, layer and feature importance analyses and a temporal classification evaluation–and finally we draw conclusions and future directions."
],
[
"In this work we formulate our classification problem as follows: given two classes of news articles, respectively $D$ (disinformation) and $M$ (mainstream), a set of news articles $A_i$ and associated class labels $C_i \\in \\lbrace D,M\\rbrace $, and a set of tweets $\\Pi _i=\\lbrace T_i^1, T_i^2, ...\\rbrace $ each of which contains an Uniform Resource Locator (URL) pointing explicitly to article $A_i$, predict the class $C_i$ of each article $A_i$. There is huge debate and controversy on a proper taxonomy of malicious and deceptive information BIBREF1BIBREF2BIBREF15BIBREF16BIBREF17BIBREF3BIBREF11. In this work we prefer the term disinformation to the more specific fake news to refer to a variety of misleading and harmful information. Therefore, we follow a source-based approach, a consolidated strategy also adopted by BIBREF6BIBREF16BIBREF2BIBREF1, in order to obtain relevant data for our analysis. We collected:",
"Disinformation articles, published by websites which are well-known for producing low-credibility content, false and misleading news reports as well as extreme propaganda and hoaxes and flagged as such by reputable journalists and fact-checkers;",
"Mainstream news, referring to traditional news outlets which deliver factual and credible information.",
"We believe that this is currently the most reliable classification approach, but it entails obvious limitations, as disinformation outlets may also publish true stories and likewise misinformation is sometimes reported on mainstream media. Also, given the choice of news sources, we cannot test whether our methodology is able to classify disinformation vs factual but not mainstream news which are published on niche, non-disinformation outlets."
],
[
"We collected tweets associated to a dozen US mainstream news websites, i.e. most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. We filtered out articles associated to less than 50 tweets. The resulting dataset contains overall $\\sim $1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and $\\sim $1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for sake of balance of the two classes, which hold 5,775 distinct articles. Diffusion censoring effects BIBREF14 were correctly taken into account in both collection procedures. We provide in Figure FIGREF4 the distribution of articles by source and political bias for both news domains.",
"As it is reported that conservatives and liberals exhibit different behaviors on online social platforms BIBREF19BIBREF20BIBREF21, we further assigned a political bias label to different US outlets (and therefore news articles) following the procedure described in BIBREF2. In order to assess the robustness of our method, we performed classification experiments by training only on left-biased (or right-biased) outlets of both disinformation and mainstream domains and testing on the entire set of sources, as well as excluding particular sources that outweigh the others in terms of samples to avoid over-fitting."
],
[
"For what concerns the Italian scenario we first collected tweets with the Streaming API in a 3-week period (April 19th, 2019-May 5th, 2019), filtering those containing URLs pointing to Italian official newspapers websites as described in BIBREF22; these correspond to the list provided by the association for the verification of newspaper circulation in Italy (Accertamenti Diffusione Stampa). We instead referred to the dataset provided by BIBREF23 to obtain a set of tweets, collected continuously since January 2019 using the same Twitter endpoint, which contain URLs to 60+ Italian disinformation websites. In order to get balanced classes (April 5th, 2019-May 5th, 2019), we retained data collected in a longer period w.r.t to mainstream news. In both cases we filtered out articles with less than 50 tweets; overall this dataset contains $\\sim $160k mainstream tweets, corresponding to 227 news articles, and $\\sim $100k disinformation tweets, corresponding to 237 news articles. We provide in Figure FIGREF5 the distribution of articles according to distinct sources for both news domains. As in the US dataset, we took into account censoring effects BIBREF14 by excluding tweets published before (left-censoring) or after two weeks (right-censoring) from the beginning of the collection process.",
"The different volumes of news shared on Twitter in the two countries are due both to the different population size of US and Italy (320 vs 60 millions) but also to the different usage of Twitter platform (and social media in general) for news consumption BIBREF24. Both datasets analyzed in this work are available from the authors on request.",
"A crucial aspect in our approach is the capability to fully capturing sharing cascades on Twitter associated to news articles. It has been reported BIBREF25 that the Twitter streaming endpoint filters out tweets matching a given query if they exceed 1% of the global daily volume of shared tweets, which nowadays is approximately $5\\cdot 10^8$; however, as we always collected less than $10^6$ tweets per day, we did not incur in this issue and we thus gathered 100% of tweets matching our query."
],
[
"We built Twitter diffusion networks following an approach widely adopted in the literature BIBREF6BIBREF17BIBREF2. We remark that there is an unavoidable limitation in Twitter Streaming API, which does not allow to retrieve true re-tweeting cascades because re-tweets always point to the original source and not to intermediate re-tweeting users BIBREF8BIBREF14; thus we adopt the only viable approach based on Twitter's public availability of data. Besides, by disentangling different interactions with multiple layers we potentially reduce the impact of this limitation on the global network properties compared to the single-layer approach used in our baseline.",
"Using the notation described in BIBREF26. we employ a multi-layer representation for Twitter diffusion networks. Sociologists have indeed recognized decades ago that it is crucial to study social systems by constructing multiple social networks where different types of ties among same individuals are used BIBREF27. Therefore, for each news article we built a multi-layer diffusion network composed of four different layers, one for each type of social interaction on Twitter platform, namely retweet (RT), reply (R), quote (Q) and mention (M), as shown in Figure FIGREF11. These networks are not necessarily node-aligned, i.e. users might be missing in some layers. We do not insert \"dummy\" nodes to represent all users as it would have severe impact on the global network properties (e.g. number of weakly connected components). Alternatively one may look at each multi-layer diffusion network as an ensemble of individual graphs BIBREF26; since global network properties are computed separately for each layer, they are not affected by the presence of any inter-layer edges.",
"In our multi-layer representation, each layer is a directed graph where we add edges and nodes for each tweet of the layer type, e.g. for the RT layer: whenever user $a$ retweets account $b$ we first add nodes $a$ and $b$ if not already present in the RT layer, then we build an edge that goes from $b$ to $a$ if it does not exists or we increment the weight by 1. Similarly for the other layers: for the R layer edges go from user $a$ (who replies) to user $b$, for the Q layer edges go from user $b$ (who is quoted by) to user $a$ and for the M layer edges go from user $a$ (who mentions) to user $b$.",
"Note that, by construction, our layers do not include isolated nodes; they correspond to \"pure tweets\", i.e. tweets which have not originated any interactions with other users. However, they are present in our dataset, and their number is exploited for classification, as described below."
],
[
"We used a set of global network indicators which allow us to encode each network layer by a tuple of features. Then we simply concatenated tuples as to represent each multi-layer network with a single feature vector. We used the following global network properties:",
"Number of Strongly Connected Components (SCC): a Strongly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $u,v$ there is a path in each direction ($u\\rightarrow v$, $v\\rightarrow u$).",
"Size of the Largest Strongly Connected Component (LSCC): the number of nodes in the largest strongly connected component of a given graph.",
"Number of Weakly Connected Components (WCC): a Weakly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $(u, v)$ there is a path $u \\leftrightarrow v$ ignoring edge directions.",
"Size of the Largest Weakly Connected Component (LWCC): the number of nodes in the largest weakly connected component of a given graph.",
"Diameter of the Largest Weakly Connected Component (DWCC): the largest distance (length of the shortest path) between two nodes in the (undirected version of) largest weakly connected component of a graph.",
"Average Clustering Coefficient (CC): the average of the local clustering coefficients of all nodes in a graph; the local clustering coefficient of a node quantifies how close its neighbours are to being a complete graph (or a clique). It is computed according to BIBREF28.",
"Main K-core Number (KC): a K-core BIBREF13 of a graph is a maximal sub-graph that contains nodes of internal degree $k$ or more; the main K-core number is the highest value of $k$ (in directed graphs the total degree is considered).",
"Density (d): the density for directed graphs is $d=\\frac{|E|}{|V||V-1|}$, where $|E|$ is the number of edges and $|N|$ is the number of vertices in the graph; the density equals 0 for a graph without edges and 1 for a complete graph.",
"Structural virality of the largest weakly connected component (SV): this measure is defined in BIBREF14 as the average distance between all pairs of nodes in a cascade tree or, equivalently, as the average depth of nodes, averaged over all nodes in turn acting as a root; for $|V| > 1$ vertices, $SV=\\frac{1}{|V||V-1|}\\sum _i\\sum _j d_{ij}$ where $d_{ij}$ denotes the length of the shortest path between nodes $i$ and $j$. This is equivalent to compute the Wiener's index BIBREF29 of the graph and multiply it by a factor $\\frac{1}{|V||V-1|}$. In our case we computed it for the undirected equivalent graph of the largest weakly connected component, setting it to 0 whenever $V=1$.",
"We used networkx Python package BIBREF30 to compute all features. Whenever a layer is empty. we simply set to 0 all its features. In addition to computing the above nine features for each layer, we added two indicators for encoding information about pure tweets, namely the number T of pure tweets (containing URLs to a given news article) and the number U of unique users authoring those tweets. Therefore, a single diffusion network is represented by a vector with $9\\cdot 4+2=38$ entries."
],
[
"Aforementioned network properties can be qualitatively explained in terms of social footprints as follows: SCC correlates with the size of the diffusion network, as the propagation of news occurs in a broadcast manner most of the time, i.e. re-tweets dominate on other interactions, while LSCC allows to distinguish cases where such mono-directionality is somehow broken. WCC equals (approximately) the number of distinct diffusion cascades pertaining to each news article, with exceptions corresponding to those cases where some cascades merge together via Twitter interactions such as mentions, quotes and replies, and accordingly LWCC and DWCC equals the size and the depth of the largest cascade. CC corresponds to the level of connectedness of neighboring users in a given diffusion network whereas KC identifies the set of most influential users in a network and describes the efficiency of information spreading BIBREF17. Finally, d describes the proportions of potential connections between users which are actually activated and SV indicates whether a news item has gained popularity with a single and large broadcast or in a more viral fashion through multiple generations.",
"For what concerns different Twitter actions, users primarily interact with each other using retweets and mentions BIBREF20.",
"The former are the main engagement activity and act as a form of endorsement, allowing users to rebroadcast content generated by other users BIBREF31. Besides, when node B retweets node A we have an implicit confirmation that information from A appeared in B's Twitter feed BIBREF12. Quotes are simply a special case of retweets with comments.",
"Mentions usually include personal conversations as they allow someone to address a specific user or to refer to an individual in the third person; in the first case they are located at the beginning of a tweet and they are known as replies, otherwise they are put in the body of a tweet BIBREF20. The network of mentions is usually seen as a stronger version of interactions between Twitter users, compared to the traditional graph of follower/following relationships BIBREF32."
],
[
"We performed classification experiments using a basic off-the-shelf classifier, namely Logistic Regression (LR) with L2 penalty; this also allows us to compare results with our baseline. We applied a standardization of the features and we used the default configuration for parameters as described in scikit-learn package BIBREF33. We also tested other classifiers (such as K-Nearest Neighbors, Support Vector Machines and Random Forest) but we omit results as they give comparable performances. We remark that our goal is to show that a very simple machine learning framework, with no parameter tuning and optimization, allows for accurate results with our network-based approach.",
"We used the following evaluation metrics to assess the performances of different classifiers (TP=true positives, FP=false positives, FN=false negatives):",
"Precision = $\\frac{TP}{TP+FP}$, the ability of a classifier not to label as positive a negative sample.",
"Recall = $\\frac{TP}{TP+FN}$, the ability of a classifier to retrieve all positive samples.",
"F1-score = $2 \\frac{\\mbox{Precision} \\cdot \\mbox{Recall}}{\\mbox{Precision} + \\mbox{Recall}}$, the harmonic average of Precision and Recall.",
"Area Under the Receiver Operating Characteristic curve (AUROC); the Receiver Operating Characteristic (ROC) curve BIBREF34, which plots the TP rate versus the FP rate, shows the ability of a classifier to discriminate positive samples from negative ones as its threshold is varied; the AUROC value is in the range $[0, 1]$, with the random baseline classifier holding AUROC$=0.5$ and the ideal perfect classifier AUROC$=1$; thus larger AUROC values (and steeper ROCs) correspond to better classifiers.",
"In particular we computed so-called macro average–simple unweighted mean–of these metrics evaluated considering both labels (disinformation and mainstream). We employed stratified shuffle split cross validation (with 10 folds) to evaluate performances.",
"Finally, we partitioned networks according to the total number of unique users involved in the sharing, i.e. the number of nodes in the aggregated network represented with a single-layer representation considering together all layers and also pure tweets. A breakdown of both datasets according to size class (and political biases for the US scenario) is provided in Table 1 and Table 2."
],
[
"In Table 3 we first provide classification performances on the US dataset for the LR classifier evaluated on the size class described in Table 1. We can observe that in all instances our methodology performs better than a random classifier (50% AUROC), with AUROC values above 85% in all cases.",
"For what concerns political biases, as the classes of mainstream and disinformation networks are not balanced (e.g., 1,292 mainstream and 4,149 disinformation networks with right bias) we employ a Balanced Random Forest with default parameters (as provided in imblearn Python package BIBREF35). In order to test the robustness of our methodology, we trained only on left-biased networks or right-biased networks and tested on the entire set of sources (relative to the US dataset); we provide a comparison of AUROC values for both biases in Figure 4. We can notice that our multi-layer approach still entails significant results, thus showing that it can accurately distinguish mainstream news from disinformation regardless of the political bias. We further corroborated this result with additional classification experiments, that show similar performances, in which we excluded from the training/test set two specific sources (one at a time and both at the same time) that outweigh the others in terms of data samples–respectively \"breitbart.com\" for right-biased sources and \"politicususa.com\" for left-biased ones.",
"We performed classification experiments on the Italian dataset using the LR classifier and different size classes (we excluded $[1000, +\\infty )$ which is empty); we show results for different evaluation metrics in Table 3. We can see that despite the limited number of samples (one order of magnitude smaller than the US dataset) the performances are overall in accordance with the US scenario. As shown in Table 4, we obtain results which are much better than our baseline in all size classes (see Table 4):",
"In the US dataset our multi-layer methodology performs much better in all size classes except for large networks ($[1000, +\\infty )$ size class), reaching up to 13% improvement on smaller networks ($[0, 100)$ size class);",
"In the IT dataset our multi-layer methodology outperforms the baseline in all size classes, with the maximum performance gain (20%) on medium networks ($[100, 1000)$ size class); the baseline generally reaches bad performances compared to the US scenario.",
"Overall, our performances are comparable with those achieved by two state-of-the-art deep learning models for \"fake news\" detection BIBREF9BIBREF36."
],
[
"In order to understand the impact of each layer on the performances of classifiers, we performed additional experiments considering separately each layer (we ignored T and U features relative to pure tweets). In Table 5 we show metrics for each layer and all size classes, computed with a 10-fold stratified shuffle split cross validation, evaluated on the US dataset; in Figure 5 we show AUROC values for each layer compared with the general multi-layer approach. We can notice that both Q and M layers alone capture adequately the discrepancies of the two distinct news domains in the United States as they obtain good results with AUROC values in the range 75%-86%; these are comparable with those of the multi-layer approach which, nevertheless, outperforms them across all size classes.",
"We obtained similar performances for the Italian dataset, as the M layer obtains comparable performances w.r.t multi-layer approach with AUROC values in the range 72%-82%. We do not show these results for sake of conciseness."
],
[
"We further investigated the importance of each feature by performing a $\\chi ^2$ test, with 10-fold stratified shuffle split cross validation, considering the entire range of network sizes $[0, +\\infty )$. We show the Top-5 most discriminative features for each country in Table 6.",
"We can notice the exact same set of features (with different relative orderings in the Top-3) in both countries; these correspond to two global network propertie–LWCC, which indicates the size of the largest cascade in the layer, and SCC, which correlates with the size of the network–associated to the same set of layers (Quotes, Retweets and Mentions).",
"We further performed a $\\chi ^2$ test to highlight the most discriminative features in the M layer of both countries, which performed equally well in the classification task as previously highlighted; also in this case we focused on the entire range of network sizes $[0, +\\infty )$. Interestingly, we discovered exactly the same set of Top-3 features in both countries, namely LWCC, SCC and DWCC (which indicates the depth of the largest cascade in the layer).",
"An inspection of the distributions of all aforementioned features revealed that disinformation news exhibit on average larger values than mainstream news.",
"We can qualitatively sum up these results as follows:",
"Sharing patterns in the two news domains exhibit discrepancies which might be country-independent and due to the content that is being shared.",
"Interactions in disinformation sharing cascades tends to be broader and deeper than in mainstream news, as widely reported in the literature BIBREF8BIBREF2BIBREF7.",
"Users likely make a different usage of mentions when sharing news belonging to the two domains, consequently shaping different sharing patterns."
],
[
"Similar to BIBREF9, we carried out additional experiments to answer the following question: how long do we need to observe a news spreading on Twitter in order to accurately classify it as disinformation or mainstream?",
"With this goal, we built several versions of our original dataset of multi-layer networks by considering in turn the following lifetimes: 1 hour, 6 hours, 12 hours, 1 day, 2 days, 3 days and 7 days; for each case, we computed the global network properties of the corresponding network and evaluated the LR classifier with 10-fold cross validation, separately for each lifetime (and considering always the entire set of networks). We show corresponding AUROC values for both US and IT datasets in Figure 6.",
"We can see that in both countries news diffusion networks can be accurately classified after just a few hours of spreading, with AUROC values which are larger than 80% after only 6 hours of diffusion. These results are very promising and suggest that articles pertaining to the two news domains exhibit discrepancies in their sharing patterns that can be timely exploited in order to rapidly detect misleading items from factual information."
],
[
"In this work we tackled the problem of the automatic classification of news articles in two domains, namely mainstream and disinformation news, with a language-independent approach which is based solely on the diffusion of news items on Twitter social platform. We disentangled different types of interactions on Twitter to accordingly build a multi-layer representation of news diffusion networks, and we computed a set of global network properties–separately for each layer–in order to encode each network with a tuple of features. Our goal was to investigate whether a multi-layer representation performs better than one layer BIBREF11, and to understand which of the features, observed at given layers, are most effective in the classification task.",
"Experiments with an off-the-shelf classifier such as Logistic Regression on datasets pertaining to two different media landscapes (US and Italy) yield very accurate classification results (AUROC up to 94%), even when accounting for the different political bias of news sources, which are far better than our baseline BIBREF11 with improvements up to 20%. Classification performances using single layers show that the layer of mentions alone entails better performance w.r.t other layers in both countries.",
"We also highlighted the most discriminative features across different layers in both countries; the results suggest that differences between the two news domains might be country-independent but rather due only to the typology of content shared, and that disinformation news shape broader and deeper cascades.",
"Additional experiments involving the temporal evolution of Twitter diffusion networks show that our methodology can accurate classify mainstream and disinformation news after a few hours of propagation on the platform.",
"Overall, our results prove that the topological features of multi-layer diffusion networks might be effectively exploited to detect online disinformation. We do not deny the presence of deceptive efforts to orchestrate the regular spread of information on social media via content amplification and manipulation BIBREF37BIBREF38. On the contrary, we postulate that such hidden forces might play to accentuate the discrepancies between the diffusion patterns of disinformation and mainstream news (and thus to make our methodology effective).",
"In the future we aim to further investigate three directions: (1) employ temporal networks to represent news diffusion and apply classification techniques that take into account the sequential aspect of data (e.g. recurrent neural networks); (2) carry out an extensive comparison of the diffusion of disinformation and mainstream news across countries to investigate deeper the presence of differences and similarities in sharing patterns; (3) leverage our network-based features in addition to state-of-the-art text-based approaches for \"fake news\" dete ction in order to deliver a real-world system to detect misleading and harmful information spreading on social media."
]
],
"section_name": [
"Introduction and related work",
"Methodology ::: Disinformation and mainstream news",
"Methodology ::: US dataset",
"Methodology ::: Italian dataset",
"Methodology ::: Building diffusion networks",
"Methodology ::: Global network properties",
"Methodology ::: Interpretation of network features and layers",
"Experiments ::: Setup",
"Experiments ::: Classification performances",
"Experiments ::: Layer importance analysis",
"Experiments ::: Feature importance analysis",
"Experiments ::: Temporal analysis",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"37561ab4085c8f8f53710c60c4f7ab18b612ab46",
"5d68115b9308954cf86fe900a4f4d4a699150978"
],
"answer": [
{
"evidence": [
"We collected tweets associated to a dozen US mainstream news websites, i.e. most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. We filtered out articles associated to less than 50 tweets. The resulting dataset contains overall $\\sim $1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and $\\sim $1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for sake of balance of the two classes, which hold 5,775 distinct articles. Diffusion censoring effects BIBREF14 were correctly taken into account in both collection procedures. We provide in Figure FIGREF4 the distribution of articles by source and political bias for both news domains.",
"For what concerns the Italian scenario we first collected tweets with the Streaming API in a 3-week period (April 19th, 2019-May 5th, 2019), filtering those containing URLs pointing to Italian official newspapers websites as described in BIBREF22; these correspond to the list provided by the association for the verification of newspaper circulation in Italy (Accertamenti Diffusione Stampa). We instead referred to the dataset provided by BIBREF23 to obtain a set of tweets, collected continuously since January 2019 using the same Twitter endpoint, which contain URLs to 60+ Italian disinformation websites. In order to get balanced classes (April 5th, 2019-May 5th, 2019), we retained data collected in a longer period w.r.t to mainstream news. In both cases we filtered out articles with less than 50 tweets; overall this dataset contains $\\sim $160k mainstream tweets, corresponding to 227 news articles, and $\\sim $100k disinformation tweets, corresponding to 237 news articles. We provide in Figure FIGREF5 the distribution of articles according to distinct sources for both news domains. As in the US dataset, we took into account censoring effects BIBREF14 by excluding tweets published before (left-censoring) or after two weeks (right-censoring) from the beginning of the collection process."
],
"extractive_spans": [],
"free_form_answer": "mainstream news and disinformation",
"highlighted_evidence": [
"We collected tweets associated to a dozen US mainstream news websites, i.e. most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. We filtered out articles associated to less than 50 tweets. The resulting dataset contains overall $\\sim $1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and $\\sim $1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for sake of balance of the two classes, which hold 5,775 distinct articles. Diffusion censoring effects BIBREF14 were correctly taken into account in both collection procedures. We provide in Figure FIGREF4 the distribution of articles by source and political bias for both news domains.",
"For what concerns the Italian scenario we first collected tweets with the Streaming API in a 3-week period (April 19th, 2019-May 5th, 2019), filtering those containing URLs pointing to Italian official newspapers websites as described in BIBREF22; these correspond to the list provided by the association for the verification of newspaper circulation in Italy (Accertamenti Diffusione Stampa). We instead referred to the dataset provided by BIBREF23 to obtain a set of tweets, collected continuously since January 2019 using the same Twitter endpoint, which contain URLs to 60+ Italian disinformation websites. In order to get balanced classes (April 5th, 2019-May 5th, 2019), we retained data collected in a longer period w.r.t to mainstream news. In both cases we filtered out articles with less than 50 tweets; overall this dataset contains $\\sim $160k mainstream tweets, corresponding to 227 news articles, and $\\sim $100k disinformation tweets, corresponding to 237 news articles. We provide in Figure FIGREF5 the distribution of articles according to distinct sources for both news domains. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Sharing patterns in the two news domains exhibit discrepancies which might be country-independent and due to the content that is being shared.",
"In this work we tackled the problem of the automatic classification of news articles in two domains, namely mainstream and disinformation news, with a language-independent approach which is based solely on the diffusion of news items on Twitter social platform. We disentangled different types of interactions on Twitter to accordingly build a multi-layer representation of news diffusion networks, and we computed a set of global network properties–separately for each layer–in order to encode each network with a tuple of features. Our goal was to investigate whether a multi-layer representation performs better than one layer BIBREF11, and to understand which of the features, observed at given layers, are most effective in the classification task."
],
"extractive_spans": [
"mainstream and disinformation news"
],
"free_form_answer": "",
"highlighted_evidence": [
"Sharing patterns in the two news domains exhibit discrepancies which might be country-independent and due to the content that is being shared.",
"In this work we tackled the problem of the automatic classification of news articles in two domains, namely mainstream and disinformation news, with a language-independent approach which is based solely on the diffusion of news items on Twitter social platform."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"3b2f9e08570ef084ee041eec860712c94a816985",
"a749cb2d9553098e9dde1470815c5662d5c45852"
],
"answer": [
{
"evidence": [
"As it is reported that conservatives and liberals exhibit different behaviors on online social platforms BIBREF19BIBREF20BIBREF21, we further assigned a political bias label to different US outlets (and therefore news articles) following the procedure described in BIBREF2. In order to assess the robustness of our method, we performed classification experiments by training only on left-biased (or right-biased) outlets of both disinformation and mainstream domains and testing on the entire set of sources, as well as excluding particular sources that outweigh the others in terms of samples to avoid over-fitting."
],
"extractive_spans": [],
"free_form_answer": "By assigning a political bias label to each news article and training only on left-biased or right-biased outlets of both disinformation and mainstream domains",
"highlighted_evidence": [
"As it is reported that conservatives and liberals exhibit different behaviors on online social platforms BIBREF19BIBREF20BIBREF21, we further assigned a political bias label to different US outlets (and therefore news articles) following the procedure described in BIBREF2. In order to assess the robustness of our method, we performed classification experiments by training only on left-biased (or right-biased) outlets of both disinformation and mainstream domains and testing on the entire set of sources, as well as excluding particular sources that outweigh the others in terms of samples to avoid over-fitting."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We perform classification experiments with an off-the-shelf Logistic Regression model on two different datasets of mainstream and disinformation news shared on Twitter respectively in the United States and in Italy during 2019. In the former case we also account for political biases inherent to different news sources, referring to the procedure proposed in BIBREF2 to label different outlets. Overall we show that we are able to classify credible vs non-credible diffusion networks (and consequently news articles) with high accuracy (AUROC up to 94%), even when accounting for the political bias of sources (and training only on left-biased or right-biased articles). We observe that the layer of mentions alone conveys useful information for the classification, denoting a different usage of this functionality when sharing news belonging to the two news domains. We also show that most discriminative features, which are relative to the breadth and depth of largest cascades in different layers, are the same across the two countries."
],
"extractive_spans": [
"we also account for political biases inherent to different news sources, referring to the procedure proposed in BIBREF2 to label different outlets. Overall we show that we are able to classify credible vs non-credible diffusion networks (and consequently news articles) with high accuracy (AUROC up to 94%), even when accounting for the political bias of sources (and training only on left-biased or right-biased articles). We observe that the layer of mentions alone conveys useful information for the classification, denoting a different usage of this functionality when sharing news belonging to the two news domains. We also show that most discriminative features, which are relative to the breadth and depth of largest cascades in different layers, are the same across the two countries."
],
"free_form_answer": "",
"highlighted_evidence": [
"We perform classification experiments with an off-the-shelf Logistic Regression model on two different datasets of mainstream and disinformation news shared on Twitter respectively in the United States and in Italy during 2019. In the former case we also account for political biases inherent to different news sources, referring to the procedure proposed in BIBREF2 to label different outlets. Overall we show that we are able to classify credible vs non-credible diffusion networks (and consequently news articles) with high accuracy (AUROC up to 94%), even when accounting for the political bias of sources (and training only on left-biased or right-biased articles). We observe that the layer of mentions alone conveys useful information for the classification, denoting a different usage of this functionality when sharing news belonging to the two news domains. We also show that most discriminative features, which are relative to the breadth and depth of largest cascades in different layers, are the same across the two countries."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"3b164a8d0fe920ecdf9d97974b44df2383d7209e",
"6eb5d6b0800964140d23b4157e1eea007c9168d0"
],
"answer": [
{
"evidence": [
"Methodology ::: US dataset",
"We collected tweets associated to a dozen US mainstream news websites, i.e. most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. We filtered out articles associated to less than 50 tweets. The resulting dataset contains overall $\\sim $1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and $\\sim $1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for sake of balance of the two classes, which hold 5,775 distinct articles. Diffusion censoring effects BIBREF14 were correctly taken into account in both collection procedures. We provide in Figure FIGREF4 the distribution of articles by source and political bias for both news domains.",
"Methodology ::: Italian dataset",
"For what concerns the Italian scenario we first collected tweets with the Streaming API in a 3-week period (April 19th, 2019-May 5th, 2019), filtering those containing URLs pointing to Italian official newspapers websites as described in BIBREF22; these correspond to the list provided by the association for the verification of newspaper circulation in Italy (Accertamenti Diffusione Stampa). We instead referred to the dataset provided by BIBREF23 to obtain a set of tweets, collected continuously since January 2019 using the same Twitter endpoint, which contain URLs to 60+ Italian disinformation websites. In order to get balanced classes (April 5th, 2019-May 5th, 2019), we retained data collected in a longer period w.r.t to mainstream news. In both cases we filtered out articles with less than 50 tweets; overall this dataset contains $\\sim $160k mainstream tweets, corresponding to 227 news articles, and $\\sim $100k disinformation tweets, corresponding to 237 news articles. We provide in Figure FIGREF5 the distribution of articles according to distinct sources for both news domains. As in the US dataset, we took into account censoring effects BIBREF14 by excluding tweets published before (left-censoring) or after two weeks (right-censoring) from the beginning of the collection process."
],
"extractive_spans": [
"US dataset",
"Italian dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"Methodology ::: US dataset\nWe collected tweets associated to a dozen US mainstream news websites, i.e. most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. ",
"Methodology ::: Italian dataset\nFor what concerns the Italian scenario we first collected tweets with the Streaming API in a 3-week period (April 19th, 2019-May 5th, 2019), filtering those containing URLs pointing to Italian official newspapers websites as described in BIBREF22; these correspond to the list provided by the association for the verification of newspaper circulation in Italy (Accertamenti Diffusione Stampa). We instead referred to the dataset provided by BIBREF23 to obtain a set of tweets, collected continuously since January 2019 using the same Twitter endpoint, which contain URLs to 60+ Italian disinformation websites."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Methodology ::: US dataset",
"We collected tweets associated to a dozen US mainstream news websites, i.e. most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. We filtered out articles associated to less than 50 tweets. The resulting dataset contains overall $\\sim $1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and $\\sim $1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for sake of balance of the two classes, which hold 5,775 distinct articles. Diffusion censoring effects BIBREF14 were correctly taken into account in both collection procedures. We provide in Figure FIGREF4 the distribution of articles by source and political bias for both news domains.",
"Methodology ::: Italian dataset",
"For what concerns the Italian scenario we first collected tweets with the Streaming API in a 3-week period (April 19th, 2019-May 5th, 2019), filtering those containing URLs pointing to Italian official newspapers websites as described in BIBREF22; these correspond to the list provided by the association for the verification of newspaper circulation in Italy (Accertamenti Diffusione Stampa). We instead referred to the dataset provided by BIBREF23 to obtain a set of tweets, collected continuously since January 2019 using the same Twitter endpoint, which contain URLs to 60+ Italian disinformation websites. In order to get balanced classes (April 5th, 2019-May 5th, 2019), we retained data collected in a longer period w.r.t to mainstream news. In both cases we filtered out articles with less than 50 tweets; overall this dataset contains $\\sim $160k mainstream tweets, corresponding to 227 news articles, and $\\sim $100k disinformation tweets, corresponding to 237 news articles. We provide in Figure FIGREF5 the distribution of articles according to distinct sources for both news domains. As in the US dataset, we took into account censoring effects BIBREF14 by excluding tweets published before (left-censoring) or after two weeks (right-censoring) from the beginning of the collection process."
],
"extractive_spans": [
"US dataset",
"Italian dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"US dataset\nWe collected tweets associated to a dozen US mainstream news websites, i.e. most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. We filtered out articles associated to less than 50 tweets. The resulting dataset contains overall $\\sim $1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and $\\sim $1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for sake of balance of the two classes, which hold 5,775 distinct articles. Diffusion censoring effects BIBREF14 were correctly taken into account in both collection procedures. We provide in Figure FIGREF4 the distribution of articles by source and political bias for both news domains.",
"Italian dataset\nFor what concerns the Italian scenario we first collected tweets with the Streaming API in a 3-week period (April 19th, 2019-May 5th, 2019), filtering those containing URLs pointing to Italian official newspapers websites as described in BIBREF22; these correspond to the list provided by the association for the verification of newspaper circulation in Italy (Accertamenti Diffusione Stampa). We instead referred to the dataset provided by BIBREF23 to obtain a set of tweets, collected continuously since January 2019 using the same Twitter endpoint, which contain URLs to 60+ Italian disinformation websites. In order to get balanced classes (April 5th, 2019-May 5th, 2019), we retained data collected in a longer period w.r.t to mainstream news. In both cases we filtered out articles with less than 50 tweets; overall this dataset contains $\\sim $160k mainstream tweets, corresponding to 227 news articles, and $\\sim $100k disinformation tweets, corresponding to 237 news articles. We provide in Figure FIGREF5 the distribution of articles according to distinct sources for both news domains. As in the US dataset, we took into account censoring effects BIBREF14 by excluding tweets published before (left-censoring) or after two weeks (right-censoring) from the beginning of the collection process."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"fa9a8870ae126cfc6a4a90af4f7d7d35638bd6eb"
],
"answer": [
{
"evidence": [
"We used a set of global network indicators which allow us to encode each network layer by a tuple of features. Then we simply concatenated tuples as to represent each multi-layer network with a single feature vector. We used the following global network properties:",
"Number of Strongly Connected Components (SCC): a Strongly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $u,v$ there is a path in each direction ($u\\rightarrow v$, $v\\rightarrow u$).",
"Size of the Largest Strongly Connected Component (LSCC): the number of nodes in the largest strongly connected component of a given graph.",
"Number of Weakly Connected Components (WCC): a Weakly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $(u, v)$ there is a path $u \\leftrightarrow v$ ignoring edge directions.",
"Size of the Largest Weakly Connected Component (LWCC): the number of nodes in the largest weakly connected component of a given graph.",
"Diameter of the Largest Weakly Connected Component (DWCC): the largest distance (length of the shortest path) between two nodes in the (undirected version of) largest weakly connected component of a graph.",
"Average Clustering Coefficient (CC): the average of the local clustering coefficients of all nodes in a graph; the local clustering coefficient of a node quantifies how close its neighbours are to being a complete graph (or a clique). It is computed according to BIBREF28.",
"Main K-core Number (KC): a K-core BIBREF13 of a graph is a maximal sub-graph that contains nodes of internal degree $k$ or more; the main K-core number is the highest value of $k$ (in directed graphs the total degree is considered).",
"Density (d): the density for directed graphs is $d=\\frac{|E|}{|V||V-1|}$, where $|E|$ is the number of edges and $|N|$ is the number of vertices in the graph; the density equals 0 for a graph without edges and 1 for a complete graph.",
"Structural virality of the largest weakly connected component (SV): this measure is defined in BIBREF14 as the average distance between all pairs of nodes in a cascade tree or, equivalently, as the average depth of nodes, averaged over all nodes in turn acting as a root; for $|V| > 1$ vertices, $SV=\\frac{1}{|V||V-1|}\\sum _i\\sum _j d_{ij}$ where $d_{ij}$ denotes the length of the shortest path between nodes $i$ and $j$. This is equivalent to compute the Wiener's index BIBREF29 of the graph and multiply it by a factor $\\frac{1}{|V||V-1|}$. In our case we computed it for the undirected equivalent graph of the largest weakly connected component, setting it to 0 whenever $V=1$."
],
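The global network indicators quoted in the evidence above can be reproduced with standard graph tooling. The following is a minimal illustrative sketch (not the authors' code), assuming each diffusion layer is available as a networkx DiGraph without self-loops; the function and feature names simply mirror the definitions quoted above.

import networkx as nx

def layer_features(G: nx.DiGraph) -> dict:
    # Assumes a non-empty directed graph with no self-loops.
    sccs = list(nx.strongly_connected_components(G))
    wccs = list(nx.weakly_connected_components(G))
    lwcc = G.subgraph(max(wccs, key=len)).to_undirected()
    n = lwcc.number_of_nodes()
    return {
        "SCC": len(sccs),                                  # number of strongly connected components
        "LSCC": max(len(c) for c in sccs),                 # size of the largest SCC
        "WCC": len(wccs),                                  # number of weakly connected components
        "LWCC": n,                                         # size of the largest WCC
        "DWCC": nx.diameter(lwcc) if n > 1 else 0,         # diameter of the (undirected) largest WCC
        "CC": nx.average_clustering(G.to_undirected()),    # average local clustering coefficient
        "KC": max(nx.core_number(G).values()),             # main k-core number (total degree)
        "d": nx.density(G),                                # |E| / (|V|(|V|-1)) for directed graphs
        "SV": nx.average_shortest_path_length(lwcc) if n > 1 else 0.0,  # structural virality of the LWCC
    }

# A multi-layer network is then encoded by concatenating the per-layer tuples:
# feature_vector = [v for layer in layers for v in layer_features(layer).values()]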
"extractive_spans": [
"Number of Strongly Connected Components (SCC)",
"Size of the Largest Strongly Connected Component (LSCC)",
"Number of Weakly Connected Components (WCC)",
"Size of the Largest Weakly Connected Component (LWCC)",
"Diameter of the Largest Weakly Connected Component (DWCC)",
"Average Clustering Coefficient (CC)",
"Main K-core Number (KC)",
"Density (d)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used a set of global network indicators which allow us to encode each network layer by a tuple of features. Then we simply concatenated tuples as to represent each multi-layer network with a single feature vector. We used the following global network properties:\n\nNumber of Strongly Connected Components (SCC): a Strongly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $u,v$ there is a path in each direction ($u\\rightarrow v$, $v\\rightarrow u$).\n\nSize of the Largest Strongly Connected Component (LSCC): the number of nodes in the largest strongly connected component of a given graph.\n\nNumber of Weakly Connected Components (WCC): a Weakly Connected Component of a directed graph is a maximal (sub)graph where for each pair of vertices $(u, v)$ there is a path $u \\leftrightarrow v$ ignoring edge directions.\n\nSize of the Largest Weakly Connected Component (LWCC): the number of nodes in the largest weakly connected component of a given graph.\n\nDiameter of the Largest Weakly Connected Component (DWCC): the largest distance (length of the shortest path) between two nodes in the (undirected version of) largest weakly connected component of a graph.\n\nAverage Clustering Coefficient (CC): the average of the local clustering coefficients of all nodes in a graph; the local clustering coefficient of a node quantifies how close its neighbours are to being a complete graph (or a clique). It is computed according to BIBREF28.\n\nMain K-core Number (KC): a K-core BIBREF13 of a graph is a maximal sub-graph that contains nodes of internal degree $k$ or more; the main K-core number is the highest value of $k$ (in directed graphs the total degree is considered).\n\nDensity (d): the density for directed graphs is $d=\\frac{|E|}{|V||V-1|}$, where $|E|$ is the number of edges and $|N|$ is the number of vertices in the graph; the density equals 0 for a graph without edges and 1 for a complete graph.\n\nStructural virality of the largest weakly connected component (SV): this measure is defined in BIBREF14 as the average distance between all pairs of nodes in a cascade tree or, equivalently, as the average depth of nodes, averaged over all nodes in turn acting as a root; for $|V| > 1$ vertices, $SV=\\frac{1}{|V||V-1|}\\sum _i\\sum _j d_{ij}$ where $d_{ij}$ denotes the length of the shortest path between nodes $i$ and $j$. This is equivalent to compute the Wiener's index BIBREF29 of the graph and multiply it by a factor $\\frac{1}{|V||V-1|}$. In our case we computed it for the undirected equivalent graph of the largest weakly connected component, setting it to 0 whenever $V=1$."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Which two news domains are country-independent?",
"How is the political bias of different sources included in the model?",
"What are the two large-scale datasets used?",
"What are the global network features which quantify different aspects of the sharing process?"
],
"question_id": [
"dd20d93166c14f1e57644cd7fa7b5e5738025cd0",
"dc2a2c177cd5df6da5d03e6e74262bf424850ec9",
"ae90c5567746fe25af2fcea0cc5f355751e05c71",
"d7644c674887ca9708eb12107acd964ae53b216d"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Distribution of the number of articles per source for US a) mainstream and b) disinformation news. Colors indicate the political bias label of each source.",
"Figure 2: Distribution of the number of articles per source for Italian a) mainstream and b) disinformation news.",
"Figure 3: An example of Twitter multi-layer diffusion network with four layers.",
"Table 2: Composition of the Italian dataset according to class and size.",
"Table 1: Composition of the US dataset according to class, size and political bias.",
"Table 3: Different evaluation metrics for the LR classifier (using a multi-layer approach) evaluated on different size classes of both the US and the Italian dataset.",
"Figure 4: AUROC values for the Balanced Random Forest classifier trained on left-biased (red) and right-biased (blue) news articles in the US dataset, and tested on the entire dataset. Error bars indicate the standard deviation of AUROC values over different folds of the cross validation.",
"Table 4: Comparison of performances of our multi-layer approach vs the baseline (single-layer). We show AUROC values for the LR classifier evaluated on different size classes of both US and IT datasets.",
"Table 5: Different evaluations metrics for LR classifier evaluated on different size classes of the US dataset and trained using features separately for each layer. Best scores for each row are written in bold.",
"Table 6: Top-5 most discriminative features according to χ2 test evaluated on both US and IT datasets (considering networks in the [0,+∞) size class).",
"Figure 5: AUROC values for the LR classifier (evaluated on different size classes of the US dataset) trained using different layers separately and aggregated (our multi-layer approach). Error bars indicate the standard deviation of AUROC values over different folds of the cross validation.",
"Figure 6: AUROC values for LR classifier evaluated on US (blue) and IT (green) dataset, considering different lifetimes (or spreading duration) for multi-layer networks. Error bars indicate standard deviations of 10-fold cross validation."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"5-Table2-1.png",
"5-Table1-1.png",
"6-Table3-1.png",
"6-Figure4-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"7-Table6-1.png",
"7-Figure5-1.png",
"8-Figure6-1.png"
]
} | [
"Which two news domains are country-independent?",
"How is the political bias of different sources included in the model?"
] | [
[
"2002.12612-Methodology ::: Italian dataset-0",
"2002.12612-Methodology ::: US dataset-0",
"2002.12612-Conclusions-0",
"2002.12612-Experiments ::: Feature importance analysis-5"
],
[
"2002.12612-Introduction and related work-7",
"2002.12612-Methodology ::: US dataset-1"
]
] | [
"mainstream news and disinformation",
"By assigning a political bias label to each news article and training only on left-biased or right-biased outlets of both disinformation and mainstream domains"
] | 164 |
1908.05441 | Multi-class Hierarchical Question Classification for Multiple Choice Science Exams | Prior work has demonstrated that question classification (QC), recognizing the problem domain of a question, can help answer it more accurately. However, developing strong QC algorithms has been hindered by the limited size and complexity of annotated data available. To address this, we present the largest challenge dataset for QC, containing 7,787 science exam questions paired with detailed classification labels from a fine-grained hierarchical taxonomy of 406 problem domains. We then show that a BERT-based model trained on this dataset achieves a large (+0.12 MAP) gain compared with previous methods, while also achieving state-of-the-art performance on benchmark open-domain and biomedical QC datasets. Finally, we show that using this model's predictions of question topic significantly improves the accuracy of a question answering system by +1.7% P@1, with substantial future gains possible as QC performance improves. | {
"paragraphs": [
[
"Understanding what a question is asking is one of the first steps that humans use to work towards an answer. In the context of question answering, question classification allows automated systems to intelligently target their inference systems to domain-specific solvers capable of addressing specific kinds of questions and problem solving methods with high confidence and answer accuracy BIBREF0 , BIBREF1 .",
"To date, question classification has primarily been studied in the context of open-domain TREC questions BIBREF2 , with smaller recent datasets available in the biomedical BIBREF3 , BIBREF4 and education BIBREF5 domains. The open-domain TREC question corpus is a set of 5,952 short factoid questions paired with a taxonomy developed by Li and Roth BIBREF6 that includes 6 coarse answer types (such as entities, locations, and numbers), and 50 fine-grained types (e.g. specific kinds of entities, such as animals or vehicles). While a wide variety of syntactic, semantic, and other features and classification methods have been applied to this task, culminating in near-perfect classification performance BIBREF7 , recent work has demonstrated that QC methods developed on TREC questions generally fail to transfer to datasets with more complex questions such as those in the biomedical domain BIBREF3 , likely due in part to the simplicity and syntactic regularity of the questions, and the ability for simpler term-frequency models to achieve near-ceiling performance BIBREF8 . In this work we explore question classification in the context of multiple choice science exams. Standardized science exams have been proposed as a challenge task for question answering BIBREF9 , as most questions contain a variety of challenging inference problems BIBREF10 , BIBREF11 , require detailed scientific and common-sense knowledge to answer and explain the reasoning behind those answers BIBREF12 , and questions are often embedded in complex examples or other distractors. Question classification taxonomies and annotation are difficult and expensive to generate, and because of the unavailability of this data, to date most models for science questions use one or a small number of generic solvers that perform little or no question decomposition BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . Our long-term interest is in developing methods that intelligently target their inferences to generate both correct answers and compelling human-readable explanations for the reasoning behind those answers. The lack of targeted solving – using the same methods for inferring answers to spatial questions about planetary motion, chemical questions about photosynthesis, and electrical questions about circuit continuity – is a substantial barrier to increasing performance (see Figure FIGREF1 ).",
"To address this need for developing methods of targetted inference, this work makes the following contributions:"
],
[
"Question classification typically makes use of a combination of syntactic, semantic, surface, and embedding methods. Syntactic patterns BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 and syntactic dependencies BIBREF3 have been shown to improve performance, while syntactically or semantically important words are often expanding using Wordnet hypernyms or Unified Medical Language System categories (for the medical domain) to help mitigate sparsity BIBREF22 , BIBREF23 , BIBREF24 . Keyword identification helps identify specific terms useful for classification BIBREF25 , BIBREF3 , BIBREF26 . Similarly, named entity recognizers BIBREF6 , BIBREF27 or lists of semantically related words BIBREF6 , BIBREF24 can also be used to establish broad topics or entity categories and mitigate sparsity, as can word embeddings BIBREF28 , BIBREF29 . Here, we empirically demonstrate many of these existing methods do not transfer to the science domain.",
"The highest performing question classification systems tend to make use of customized rule-based pattern matching BIBREF30 , BIBREF7 , or a combination of rule-based and machine learning approaches BIBREF19 , at the expense of increased model construction time. A recent emphasis on learned methods has shown a large set of CNN BIBREF29 and LSTM BIBREF8 variants achieve similar accuracy on TREC question classification, with these models exhibiting at best small gains over simple term frequency models. These recent developments echo the observations of Roberts et al. BIBREF3 , who showed that existing methods beyond term frequency models failed to generalize to medical domain questions. Here we show that strong performance across multiple datasets is possible using a single learned model.",
"Due to the cost involved in their construction, question classification datasets and classification taxonomies tend to be small, which can create methodological challenges. Roberts et al. BIBREF3 generated the next-largest dataset from TREC, containing 2,936 consumer health questions classified into 13 question categories. More recently, Wasim et al. BIBREF4 generated a small corpus of 780 biomedical domain questions organized into 88 categories. In the education domain, Godea et al. BIBREF5 collected a set of 1,155 classroom questions and organized these into 16 categories. To enable a detailed study of science domain question classification, here we construct a large-scale challenge dataset that exceeds the size and classification specificity of other datasets, in many cases by nearly an order of magnitude."
],
[
"Questions: We make use of the 7,787 science exam questions of the Aristo Reasoning Challenge (ARC) corpus BIBREF31 , which contains standardized 3rd to 9th grade science questions from 12 US states from the past decade. Each question is a 4-choice multiple choice question. Summary statistics comparing the complexity of ARC and TREC questions are shown in Table TABREF5 .",
"Taxonomy: Starting with the syllabus for the NY Regents exam, we identified 9 coarse question categories (Astronomy, Earth Science, Energy, Forces, Life Science, Matter, Safety, Scientific Method, Other), then through a data-driven analysis of 3 exam study guides and the 3,370 training questions, expanded the taxonomy to include 462 fine-grained categories across 6 hierarchical levels of granularity. The taxonomy is designed to allow categorizing questions into broad curriculum topics at it's coarsest level, while labels at full specificity separate questions into narrow problem domains suitable for targetted inference methods. Because of its size, a subset of the classification taxonomy is shown in Table TABREF6 , with the full taxonomy and class definitions included in the supplementary material.",
"Annotation: Because of the complexity of the questions, it is possible for one question to bridge multiple categories – for example, a wind power generation question may span both renewable energy and energy conversion. We allow up to 2 labels per question, and found that 16% of questions required multiple labels. Each question was independently annotated by two annotators, with the lead annotator a domain expert in standardized exams. Annotators first independently annotated the entire question set, then questions without complete agreement were discussed until resolution. Before resolution, interannotator agreement (Cohen's Kappa) was INLINEFORM0 = 0.58 at the finest level of granularity, and INLINEFORM1 = 0.85 when considering only the coarsest 9 categories. This is considered moderate to strong agreement BIBREF32 . Based on the results of our error analysis (see Section SECREF21 ), we estimate the overall accuracy of the question classification labels after resolution to be approximately 96%. While the full taxonomy contains 462 fine-grained categories derived from both standardized questions, study guides, and exam syllabi, we observed only 406 of these categories are tested in the ARC question set."
],
[
"We identified 5 common models in previous work primarily intended for learned classifiers rather than hand-crafted rules. We adapt these models to a multi-label hierarchical classification task by training a series of one-vs-all binary classifiers BIBREF34 , one for each label in the taxonomy. With the exception of the CNN and BERT models, following previous work BIBREF19 , BIBREF3 , BIBREF8 we make use of an SVM classifier using the LIBSvM framework BIBREF35 with a linear kernel. Models are trained and evaluated from coarse to fine levels of taxonomic specificity. At each level of taxonomic evaluation, a set of non-overlapping confidence scores for each binary classifier are generated and sorted to produce a list of ranked label predictions. We evaluate these ranks using Mean Average Precision BIBREF36 . ARC questions are evaluated using the standard 3,370 questions for training, 869 for development, and 3,548 for testing.",
"N-grams, POS, Hierarchical features: A baseline bag-of-words model incorporating both tagged and untagged unigrams and bigams. We also implement the hierarchical classification feature of Li and Roth BIBREF6 , where for a given question, the output of the classifier at coarser levels of granularity serves as input to the classifier at the current level of granularity.",
"Dependencies: Bigrams of Stanford dependencies BIBREF37 . For each word, we create one unlabeled bigram for each outgoing link from that word to it's dependency BIBREF20 , BIBREF3 .",
"Question Expansion with Hypernyms: We perform hypernym expansion BIBREF22 , BIBREF19 , BIBREF3 by including WordNet hypernyms BIBREF38 for the root dependency word, and words on it's direct outgoing links. WordNet sense is identified using Lesk word-sense disambiguation BIBREF39 , using question text for context. We implement the heuristic of Van-tu et al. BIBREF24 , where more distant hypernyms receive less weight.",
"Essential Terms: Though not previously reported for QC, we make use of unigrams of keywords extracted using the Science Exam Essential Term Extractor of Khashabi et al. BIBREF26 . For each keyword, we create one binary unigram feature.",
"CNN: Kim BIBREF28 demonstrated near state-of-the-art performance on a number of sentence classification tasks (including TREC question classification) by using pre-trained word embeddings BIBREF40 as feature extractors in a CNN model. Lei et al. BIBREF29 showed that 10 CNN variants perform within +/-2% of Kim's BIBREF28 model on TREC QC. We report performance of our best CNN model based on the MP-CNN architecture of Rao et al. BIBREF41 , which works to establish the similarity between question text and the definition text of the question classes. We adapt the MP-CNN model, which uses a “Siamese” structure BIBREF33 , to create separate representations for both the question and the question class. The model then makes use of a triple ranking loss function to minimize the distance between the representations of questions and the correct class while simultaneously maximising the distance between questions and incorrect classes. We optimize the network using the method of Tu BIBREF42 .",
"BERT-QC (This work): We make use of BERT BIBREF43 , a language model using bidirectional encoder representations from transformers, in a sentence-classification configuration. As the original settings of BERT do not support multi-label classification scenarios, and training a series of 406 binary classifiers would be computationally expensive, we use the duplication method of Tsoumakas et al. BIBREF34 where we enumerate multi-label questions as multiple single-label instances during training by duplicating question text, and assigning each instance one of the multiple labels. Evaluation follows the standard procedure where we generate a list of ranked class predictions based on class probabilities, and use this to calculate Mean Average Precision (MAP) and Precision@1 (P@1). As shown in Table TABREF7 , this BERT-QC model achieves our best question classification performance, significantly exceeding baseline performance on ARC by 0.12 MAP and 13.5% P@1."
],
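As a rough illustration of the BERT-QC setup described just above (the duplication trick for multi-label training, and Mean Average Precision / P@1 over ranked label predictions), the sketch below uses hypothetical data structures — a list of (question_text, gold_label_set) pairs and a per-question class ranking — and is not the authors' released code.

def duplicate_multilabel(examples):
    # Each (question_text, {labels}) pair becomes one single-label training instance per label,
    # with the question text duplicated (the duplication method of Tsoumakas et al.).
    return [(text, label) for text, labels in examples for label in labels]

def average_precision(ranked_labels, gold_labels):
    # ranked_labels: all classes sorted by predicted probability for one question.
    hits, precisions = 0, []
    for rank, label in enumerate(ranked_labels, start=1):
        if label in gold_labels:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(len(gold_labels), 1)

def evaluate(rankings, gold_sets):
    # Mean Average Precision and Precision@1 over all questions.
    map_score = sum(average_precision(r, g) for r, g in zip(rankings, gold_sets)) / len(gold_sets)
    p_at_1 = sum(r[0] in g for r, g in zip(rankings, gold_sets)) / len(gold_sets)
    return {"MAP": map_score, "P@1": p_at_1}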
[
"Apart from term frequency methods, question classification methods developed on one dataset generally do not exhibit strong transfer performance to other datasets BIBREF3 . While BERT-QC achieves large gains over existing methods on the ARC dataset, here we demonstrate that BERT-QC also matches state-of-the-art performance on TREC BIBREF6 , while surpassing state-of-the-art performance on the GARD corpus of consumer health questions BIBREF3 and MLBioMedLAT corpus of biomedical questions BIBREF4 . As such, BERT-QC is the first model to achieve strong performance across more than one question classification dataset.",
"TREC question classification is divided into separate coarse and fine-grained tasks centered around inferring the expected answer types of short open-domain factoid questions. TREC-6 includes 6 coarse question classes (abbreviation, entity, description, human, location, numeric), while TREC-50 expands these into 50 more fine-grained types. TREC question classification methods can be divided into those that learn the question classification task, and those that make use of either hand-crafted or semi-automated syntactic or semantic extraction rules to infer question classes. To date, the best reported accuracy for learned methods is 98.0% by Xia et al. BIBREF8 for TREC-6, and 91.6% by Van-tu et al. BIBREF24 for TREC-50. Madabushi et al. BIBREF7 achieve the highest to-date performance on TREC-50 at 97.2%, using rules that leverage the strong syntactic regularities in the short TREC factoid questions.",
"We compare the performance of BERT-QC with recently reported performance on this dataset in Table TABREF11 . BERT-QC achieves state-of-the-art performance on fine-grained classification (TREC-50) for a learned model at 92.0% accuracy, and near state-of-the-art performance on coarse classification (TREC-6) at 96.2% accuracy.",
"Because of the challenges with collecting biomedical questions, the datasets and classification taxonomies tend to be small, and rule-based methods often achieve strong results BIBREF45 . Roberts et al. BIBREF3 created the largest biomedical question classification dataset to date, annotating 2,937 consumer health questions drawn from the Genetic and Rare Diseases (GARD) question database with 13 question types, such as anatomy, disease cause, diagnosis, disease management, and prognoses. Roberts et al. BIBREF3 found these questions largely resistant to learning-based methods developed for TREC questions. Their best model (CPT2), shown in Table TABREF17 , makes use of stemming and lists of semantically related words and cue phrases to achieve 80.4% accuracy. BERT-QC reaches 84.9% accuracy on this dataset, an increase of +4.5% over the best previous model. We also compare performance on the recently released MLBioMedLAT dataset BIBREF4 , a multi-label biomedical question classification dataset with 780 questions labeled using 88 classification types drawn from 133 Unified Medical Language System (UMLS) categories. Table TABREF18 shows BERT-QC exceeds their best model, focus-driven semantic features (FDSF), by +0.05 Micro-F1 and +3% accuracy."
],
[
"We performed an error analysis on 50 ARC questions where the BERT-QC system did not predict the correct label, with a summary of major error categories listed in Table TABREF20 .",
"Associative Errors: In 35% of cases, predicted labels were nearly correct, differing from the correct label only by the finest-grained (leaf) element of the hierarchical label (for example, predicting Matter INLINEFORM0 Changes of State INLINEFORM1 Boiling instead of Matter INLINEFORM2 Changes of State INLINEFORM3 Freezing). The bulk of the remaining errors were due to questions containing highly correlated words with a different class, or classes themselves being highly correlated. For example, a specific question about Weather Models discusses “environments” changing over “millions of years”, where discussions of environments and long time periods tend to be associated with questions about Locations of Fossils. Similarly, a question containing the word “evaporation” could be primarily focused on either Changes of State or the Water Cycle (cloud generation), and must rely on knowledge from the entire question text to determine the correct problem domain. We believe these associative errors are addressable technical challenges that could ultimately lead to increased performance in subsequent models.",
"Errors specific to the multiple-choice domain: We observed that using both question and all multiple choice answer text produced large gains in question classification performance – for example, BERT-QC performance increases from 0.516 (question only) to 0.654 (question and all four answer candidates), an increase of 0.138 MAP. Our error analysis observed that while this substantially increases QC performance, it changes the distribution of errors made by the system. Specifically, 25% of errors become highly correlated with an incorrect answer candidate, which (we show in Section SECREF5 ) can reduce the performance of QA solvers."
],
[
"Because of the challenges of errorful label predictions correlating with incorrect answers, it is difficult to determine the ultimate benefit a QA model might receive from reporting QC performance in isolation. Coupling QA and QC systems can often be laborious – either a large number of independent solvers targeted to specific question types must be constructed BIBREF46 , or an existing single model must be able to productively incorporate question classification information. Here we demostrate the latter – that a BERT QA model is able to incorporate question classification information through query expansion. BERT BIBREF43 recently demonstrated state-of-the-art performance on benchmark question answering datasets such as SQUaD BIBREF47 , and near human-level performance on SWAG BIBREF48 . Similarly, Pan et al. BIBREF49 demonstrated that BERT achieves the highest accuracy on the most challenging subset of ARC science questions. We make use of a BERT QA model using the same QA paradigm described by Pan et al. BIBREF49 , where QA is modeled as a next-sentence prediction task that predicts the likelihood of a given multiple choice answer candidate following the question text. We evaluate the question text and the text of each multiple choice answer candidate separately, where the answer candidate with the highest probablity is selected as the predicted answer for a given question. Performance is evaluated using Precision@1 BIBREF36 . Additional model details and hyperparameters are included in the Appendix.",
"We incorporate QC information into the QA process by implementing a variant of a query expansion model BIBREF50 . Specifically, for a given {question, QC_label} pair, we expand the question text by concatenating the definition text of the question classification label to the start of the question. We use of the top predicted question classification label for each question. Because QC labels are hierarchical, we append the label definition text for each level of the label INLINEFORM0 . An exampe of this process is shown in Table TABREF23 .",
"Figure FIGREF24 shows QA peformance using predicted labels from the BERT-QC model, compared to a baseline model that does not contain question classification information. As predicted by the error analysis, while a model trained with question and answer candidate text performs better at QC than a model using question text alone, a large proportion of the incorrect predictions become associated with a negative answer candidate, reducing overall QA performance, and highlighting the importance of evaluating QC and QA models together. When using BERT-QC trained on question text alone, at the finest level of specificity (L6) where overall question classification accuracy is 57.8% P@1, question classification significantly improves QA performance by +1.7% P@1 INLINEFORM0 . Using gold labels shows ceiling QA performance can reach +10.0% P@1 over baseline, demonstrating that as question classification model performance improves, substantial future gains are possible. An analysis of expected gains for a given level of QC performance is included in the Appendix, showing approximately linear gains in QA performance above baseline for QC systems able to achieve over 40% classification accuracy. Below this level, the decreased performance from noise induced by incorrect labels surpasses gains from correct labels.",
"Hyperparameters: Pilot experiments on both pre-trained BERT-Base and BERT-Large checkpoints showed similar performance benefits at the finest levels of question classification granularity (L6), but the BERT-Large model demonstrated higher overall baseline performance, and larger incremental benefits at lower levels of QC granularity, so we evaluated using that model. We lightly tuned hyperparameters on the development set surrounding those reported by Devlin et al. BIBREF43 , and ultimately settled on parameters similar to their original work, tempered by technical limitations in running the BERT-Large model on available hardware: maximum sequence length = 128, batch size = 16, learning rate: 1e-5. We report performance as the average of 10 runs for each datapoint. The number of epochs were tuned on each run on the development set (to a maximum of 8 epochs), where most models converged to maximum performance within 5 epochs.",
"Preference for uncorrelated errors in multiple choice question classification: We primarily report QA performance using BERT-QC trained using text from only the multiple choice questions and not their answer candidates. While this model achieved lower overall QC performance compared to the model trained with both question and multiple choice answer candidate text, it achieved slightly higher performance in the QA+QC setting. Our error analysis in Section SECREF21 shows that though models trained on both question and answer text can achieve higher QC performance, when they make QC errors, the errors tend to be highly correlated with an incorrect answer candidate, which can substantially reduce QA performance. This is an important result for question classification in the context of multiple choice exams.In the context of multiple choice exams, correlated noise can substantially reduce QA performance, meaning the kinds of errors that a model makes are important, and evaluating QC performance in context with QA models that make use of those QC systems is critical.",
"Related to this result, we provide an analysis of the noise sensitivity of the QA+QC model for different levels of question classification prediction accuracy. Here, we perturb gold question labels by randomly selecting a proportion of questions (between 5% and 40%) and randomly assigning that question a different label. Figure FIGREF36 shows that this uncorrelated noise provides roughly linear decreases in performance, and still shows moderate gains at 60% accuracy (40% noise) with uncorrelated noise. This suggests that when making errors, making random errors (that are not correlated to incorrect multiple choice answers) is preferential.",
"Training with predicted labels: We observed small gains when training the BERT-QA model with predicted QC labels. We generate predicted labels for the training set using 5-fold crossvalidation over only the training questions.",
"Statistics: We use non-parametric bootstrap resampling to compare baseline (no label) and experimental (QC labeled) runs of the QA+QC experiment. Because the BERT-QA model produces different performance values across successive runs, we perform 10 runs of each condition. We then compute pairwise p-values for each of the 10 no label and QC labeled runs (generating 100 comparisons), then use Fisher's method to combine these into a final statistic.",
"Question classification paired with question answering shows statistically significant gains of +1.7% P@1 at L6 using predicted labels, and a ceiling gain of up to +10% P@1 using gold labels. The QA performance graph in Figure FIGREF24 contains two deviations from the expectation of linear gains with increasing specificity, at L1 and L3. Region at INLINEFORM0 On gold labels, L3 provides small gains over L2, where as L4 provides large gains over L3. We hypothesize that this is because approximately 57% of question labels belong to the Earth Science or Life Science categories which have much more depth than breadth in the standardized science curriculum, and as such these categories are primarily differentiated from broad topics into detailed problem types at levels L4 through L6. Most other curriculum categories have more breadth than depth, and show strong (but not necessarily full) differentiation at L2. Region at INLINEFORM1 Predicted performance at L1 is higher than gold performance at L1. We hypothesize this is because we train using predicted rather than gold labels, which provides a boost in performance. Training on gold labels and testing on predicted labels substantially reduces the difference between gold and predicted performance.",
"Though initial raw interannotator agreement was measured at INLINEFORM0 , to maximize the quality of the annotation the annotators performed a second pass where all disagreements were manually resolved. Table TABREF30 shows question classification performance of the BERT-QC model at 57.8% P@1, meaning 42.2% of the predicted labels were different than the gold labels. The question classification error analysis in Table TABREF20 found that of these 42.2% of errorful predictions, 10% of errors (4.2% of total labels) were caused by the gold labels being incorrect. This allows us to estimate that the overall quality of the annotation (the proportion of questions that have a correct human authored label) is approximately 96%."
],
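The query-expansion coupling of QC and QA described in this section can be sketched as follows. This is an illustrative reconstruction rather than the released implementation: the definitions dictionary, the "→"-delimited label format, and score_next_sentence (a stand-in for BERT's next-sentence prediction head) are all assumptions.

def expand_question(question_text, qc_label, definitions):
    # Prepend the definition text of every level of the hierarchical QC label,
    # e.g. for "Matter → Changes of State → Boiling" the definitions of
    # "Matter", "Matter → Changes of State", and the full label are appended.
    parts = qc_label.split(" → ")
    prefixes = [" → ".join(parts[:i + 1]) for i in range(len(parts))]
    defs = " ".join(definitions[p] for p in prefixes if p in definitions)
    return (defs + " " + question_text).strip()

def answer_question(question_text, candidates, qc_label, definitions, score_next_sentence):
    # Score each multiple-choice candidate as a "next sentence" of the expanded question
    # and return the index of the highest-scoring candidate (evaluated with P@1).
    expanded = expand_question(question_text, qc_label, definitions)
    scores = [score_next_sentence(expanded, cand) for cand in candidates]
    return max(range(len(candidates)), key=scores.__getitem__)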
[
"Detailed error analyses for question answering systems are typically labor intensive, often requiring hours or days to perform manually. As a result error analyses are typically completed infrequently, in spite of their utility to key decisions in the algortithm or knowledge construction process. Here we show having access to detailed question classification labels specifying fine-grained problem domains provides a mechanism to automatically generate error analyses in seconds instead of days.",
"To illustrate the utility of this approach, Table TABREF26 shows the performance of the BERT QA+QC model broken down by specific question classes. This allows automatically identifying a given model's strengths – for example, here questions about Human Health, Material Properties, and Earth's Inner Core are well addressed by the BERT-QA model, and achieve well above the average QA performance of 49%. Similarly, areas of deficit include Changes of State, Reproduction, and Food Chain Processes questions, which see below-average QA performance. The lowest performing class, Safety Procedures, demonstrates that while this model has strong performance in many areas of scientific reasoning, it is worse than chance at answering questions about safety, and would be inappropriate to deploy for safety-critical tasks.",
"While this analysis is shown at an intermediate (L2) level of specificity for space, more detailed analyses are possible. For example, overall QA performance on Scientific Inference questions is near average (47%), but increasing granularity to L3 we observe that questions addressing Experiment Design or Making Inferences – challenging questions even for humans – perform poorly (33% and 20%) when answered by the QA system. This allows a system designer to intelligently target problem-specific knowledge resources and inference methods to address deficits in specific areas."
],
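A minimal sketch of the automated error analysis described above — QA accuracy broken down by question class at a chosen depth of the hierarchical label (e.g. L2) — assuming per-question records of (predicted QC label, whether the QA answer was correct); the record format is hypothetical.

from collections import defaultdict

def accuracy_by_class(records, depth=2):
    # records: iterable of (qc_label, is_correct); qc_label like "Matter → Changes of State → Boiling".
    correct, total = defaultdict(int), defaultdict(int)
    for qc_label, is_correct in records:
        key = " → ".join(qc_label.split(" → ")[:depth])  # truncate the label to the requested depth
        total[key] += 1
        correct[key] += int(is_correct)
    # Return (accuracy, support, class) tuples, worst-performing classes first.
    return sorted((correct[k] / total[k], total[k], k) for k in total)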
[
"Question classification can enable targetting question answering models, but is challenging to implement with high performance without using rule-based methods. In this work we generate the most fine-grained challenge dataset for question classification, using complex and syntactically diverse questions, and show gains of up to 12% are possible with our question classification model across datasets in open, science, and medical domains. This model is the first demonstration of a question classification model achieving state-of-the-art results across benchmark datasets in open, science, and medical domains. We further demonstrate attending to question type can significantly improve question answering performance, with large gains possible as quesion classification performance improves. Our error analysis suggests that developing high-precision methods of question classification independent of their recall can offer the opportunity to incrementally make use of the benefits of question classification without suffering the consequences of classification errors on QA performance."
],
[
"Our Appendix and supplementary material (available at http://www.cognitiveai.org/explanationbank/) includes data, code, experiment details, and negative results."
],
[
"The authors wish to thank Elizabeth Wainwright and Stephen Marmorstein for piloting an earlier version of the question classification annotation. We thank the Allen Insitute for Artificial Intelligence and National Science Founation (NSF 1815948 to PJ) for funding this work."
],
[
"Classification Taxonomy: The full classification taxonomy is included in separate files, both coupled with definitions, and as a graphical visualization.",
"Annotation Procedure: Primary annotation took place over approximately 8 weeks. Annotators were instructed to provide up to 2 labels from the full classification taxonomy (462 labels) that were appropriate for each question, and to provide the most specific label available in the taxonomy for a given question. Of the 462 labels in the classification taxonomy, the ARC questions had non-zero counts in 406 question types. Rarely, questions were encountered by annotators that did not clearly fit into a label at the end of the taxonomy, and in these cases the annotators were instructed to choose a more generic label higher up the taxonomy that was appropriate. This occurred when the production taxonomy failed to have specific categories for infrequent questions testing knowledge that is not a standard part of the science curriculum. For example, the question: ",
"Which material is the best natural resource to use for making water-resistant shoes? (A) cotton (B) leather (C) plastic (D) wool",
"tests a student's knowledge of the water resistance of different materials. Because this is not a standard part of the curriculum, and wasn't identified as a common topic in the training questions, the annotators tag this question as belonging to Matter INLINEFORM0 Properties of Materials, rather than a more specific category.",
"Questions from the training, development, and test sets were randomly shuffled to counterbalance any learning effects during the annotation procedure, but were presented in grade order (3rd to 9th grade) to reduce context switching (a given grade level tends to use a similar subset of the taxonomy – for example, 3rd grade questions generally do not address Chemical Equations or Newtons 1st Law of Motion).",
"Interannotator Agreement: To increase quality and consistency, each annotator annotated the entire dataset of 7,787 questions. Two annotators were used, with the lead annotator possessing previous professional domain expertise. Annotation proceeded in a two-stage process, where in stage 1 annotators completed their annotation independently, and in stage 2 each of the questions where the annotators did not have complete agreement were manually resolved by the annotators, resulting in high-quality classification annotation.",
"Because each question can have up to two labels, we treat each label for a given question as a separate evaluation of interannotator agreement. That is, for questions where both annotators labeled each question as having 1 or 2 labels, we treat this as 1 or 2 separate evaluations of interannotator agreement. For cases where one annotator labeled as question as having 1 label, and the other annotator labeled that same question as having 2 labels, we conservatively treat this as two separate interannotator agreements where one annotator failed to specify the second label and had zero agreement on that unspecified label.",
"Though the classification procedure was fine-grained compared to other question classification taxonomies, containing an unusually large number of classes (406), overall raw interannotator agreement before resolution was high (Cohen's INLINEFORM0 = 0.58). When labels are truncated to a maximum taxonomy depth of N, raw interannotator increases to INLINEFORM1 = 0.85 at the coarsest (9 class) level (see Table TABREF28 ). This is considered moderate to strong agreement (see McHugh BIBREF32 for a discussion of the interpretation of the Kappa statistic). Based on the results of an error analysis on the question classification system (see Section UID38 ), we estimate that the overall accuracy of the question classification labels after resolution is approximately 96% .",
"Annotators disagreed on 3441 (44.2%) of questions. Primary sources of disagreement before resolution included each annotator choosing a single category for questions requiring multiple labels (e.g. annotator 1 assigning a label of X, and annotator 2 assigning a label of Y, when the gold label was multilabel X, Y), which was observed in 18% of disagreements. Similarly, we observed annotators choosing similar labels but at different levels of specificity in the taxonomy (e.g. annotator 1 assigning a label of Matter INLINEFORM0 Changes of State INLINEFORM1 Boiling, where annotator 2 assigned Matter INLINEFORM2 Changes of State), which occurred in 12% of disagreements before resolution."
],
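The depth-truncated agreement figures reported above can be computed directly; a small sketch follows, assuming each annotator's labels are stored as "→"-joined strings and using scikit-learn's Cohen's kappa. The storage format is an assumption, and multi-label questions would additionally need the pairing convention described in the preceding paragraphs.

from sklearn.metrics import cohen_kappa_score

def truncate(label, depth):
    return " → ".join(label.split(" → ")[:depth])

def kappa_at_depth(labels_a, labels_b, depth):
    # Agreement between two annotators when labels are cut to a maximum taxonomy depth.
    a = [truncate(l, depth) for l in labels_a]
    b = [truncate(l, depth) for l in labels_b]
    return cohen_kappa_score(a, b)

# e.g. kappa_at_depth(ann1, ann2, depth=1) for the 9 coarse classes,
#      kappa_at_depth(ann1, ann2, depth=6) for full-specificity labels.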
[
"Because of space limitations the question classification results are reported in Table TABREF7 only using Mean Average Precision (MAP). We also include Precision@1 (P@1), the overall accuracy of the highest-ranked prediction for each question classification model, in Table TABREF30 .",
"CNN: We implemented the CNN sentence classifier of Kim BIBREF28 , which demonstrated near state-of-the-art performance on a number of sentence classification tasks (including TREC question classification) by using pre-trained word embeddings BIBREF40 as feature extractors in a CNN model. We adapted the original CNN non-static model to multi-label classification by changing the fully connected softmax layer to sigmoid layer to produce a sigmoid output for each label simultaneously. We followed the same parameter settings reported by Kim et al. except the learning rate, which was tuned based on the development set. Pilot experiments did not show a performance improvement over the baseline model.",
"Label Definitions: Question terms can be mapped to categories using manual heuristics BIBREF19 . To mitigate sparsity and limit heuristic use, here we generated a feature comparing the cosine similarity of composite embedding vectors BIBREF51 representing question text and category definition text, using pretrained GloVe embeddings BIBREF52 . Pilot experiments showed that performance did not significantly improve.",
"Question Expansion with Hypernyms (Probase Version): One of the challenges of hypernym expansion BIBREF22 , BIBREF19 , BIBREF3 is determining a heuristic for the termination depth of hypernym expansion, as in Van-tu et al. BIBREF24 . Because science exam questions are often grounded in specific examples (e.g. a car rolling down a hill coming to a stop due to friction), we hypothesized that knowing certain categories of entities can be important for identifying specific question types – for example, observing that a question contains a kind of animal may be suggestive of a Life Science question, where similarly vehicles or materials present in questions may suggest questions about Forces or Matter, respectively. The challenge with WordNet is that key hypernyms can be at very different depths from query terms – for example, “cat” is distance 10 away from living thing, “car” is distance 4 away from vehicle, and “copper” is distance 2 away from material. Choosing a static threshold (or decaying threshold, as in Van-tu et al. BIBREF24 ) will inheriently reduce recall and limit the utility of this method of query expansion.",
"To address this, we piloted a hypernym expansion experiment using the Probase taxonomy BIBREF53 , a collection of 20.7M is-a pairs mined from the web, in place of WordNet. Because the taxonomic pairs in Probase come from use in naturalistic settings, links tend to jump levels in the WordNet taxonomy and be expressed in common forms. For example, INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , are each distance 1 in the Probase taxonomy, and high-frequency (i.e. high-confidence) taxonomic pairs.",
"Similar to query expansion using WordNet Hypernyms, our pilot experiments did not observe a benefit to using Probase hypernyms over the baseline model. An error analysis suggested that the large number of noisy and out-of-context links present in Probase may have reduced performance, and in response we constructed a filtered list of 710 key hypernym categories manually filtered from a list of hypernyms seeded using high-frequency words from an in-house corpus of 250 in-domain science textbooks. We also did not observe a benefit to question classification over the baseline model when expanding only to this manually curated list of key hypernyms.",
"Topic words: We made use of the 77 TREC word lists of Li and Roth BIBREF6 , containing a total of 3,257 terms, as well as an in-house set of 144 word lists on general and elementary science topics mined from the web, such as ANIMALS, VEGETABLES, and VEHICLES, containing a total of 29,059 words. To mitigate sparsity, features take the form of counts for a specific topic – detecting the words turtle and giraffe in a question would provide a count of 2 for the ANIMAL feature. This provides a light form of domain-specific entity and action (e.g. types of changes) recognition. Pilot experiments showed that this wordlist feature did add a modest performance benefit of approximately 2% to question classification accuracy. Taken together with our results on hypernym expansion, this suggests that manually curated wordlists can show modest benefits for question classification performance, but at the expense of substantial effort in authoring or collecting these extensive wordlists.",
"Hyperparameters: For each layer of the class label hierarchy, we tune the hyperparameters based on the development set. We use the pretrained BERT-Base (uncased) checkpoint. We use the following hyperparameters: maximum sequence length = 256, batch size = 16, learning rates: 2e-5 (L1), 5e-5 (L2-L6), epochs: 5 (L1), 25 (L2-L6).",
"Statistics: We use non-parametric bootstrap resampling to compare the baseline (Li and Roth BIBREF6 model) to all experimental models to determine significance, using 10,000 bootstrap resamples."
]
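The non-parametric bootstrap comparison used for significance testing above can be sketched as below. This is an illustrative paired-bootstrap test over per-question scores, not the authors' code; the 10,000-resample default simply mirrors the number quoted in the preceding paragraph.

import random

def bootstrap_p_value(baseline_scores, experimental_scores, n_resamples=10000, seed=0):
    # One-sided p-value that the experimental system outperforms the baseline:
    # resample question indices with replacement and count how often the mean
    # difference fails to favour the experimental system.
    rng = random.Random(seed)
    n = len(baseline_scores)
    failures = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        diff = sum(experimental_scores[i] - baseline_scores[i] for i in idx) / n
        if diff <= 0:
            failures += 1
    return failures / n_resamples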
],
"section_name": [
"Introduction",
"Related work",
"Questions and Classification Taxonomy",
"Question Classification on Science Exams",
"Comparison with Benchmark Datasets",
"Error Analysis",
"Question Answering with QC Labels",
"Automating Error Analyses with QC",
"Conclusion",
"Resources",
"Acknowledgements",
"Annotation",
"Question Classification"
]
} | {
"answers": [
{
"annotation_id": [
"b2bfdbe78ae45025cdc59a6493aaf091a54f91a7",
"f78bb60801709077f33f708f485c3106f594ab5e"
],
"answer": [
{
"evidence": [
"Apart from term frequency methods, question classification methods developed on one dataset generally do not exhibit strong transfer performance to other datasets BIBREF3 . While BERT-QC achieves large gains over existing methods on the ARC dataset, here we demonstrate that BERT-QC also matches state-of-the-art performance on TREC BIBREF6 , while surpassing state-of-the-art performance on the GARD corpus of consumer health questions BIBREF3 and MLBioMedLAT corpus of biomedical questions BIBREF4 . As such, BERT-QC is the first model to achieve strong performance across more than one question classification dataset."
],
"extractive_spans": [
"ARC ",
"TREC",
"GARD ",
"MLBioMedLAT "
],
"free_form_answer": "",
"highlighted_evidence": [
"While BERT-QC achieves large gains over existing methods on the ARC dataset, here we demonstrate that BERT-QC also matches state-of-the-art performance on TREC BIBREF6 , while surpassing state-of-the-art performance on the GARD corpus of consumer health questions BIBREF3 and MLBioMedLAT corpus of biomedical questions BIBREF4 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Apart from term frequency methods, question classification methods developed on one dataset generally do not exhibit strong transfer performance to other datasets BIBREF3 . While BERT-QC achieves large gains over existing methods on the ARC dataset, here we demonstrate that BERT-QC also matches state-of-the-art performance on TREC BIBREF6 , while surpassing state-of-the-art performance on the GARD corpus of consumer health questions BIBREF3 and MLBioMedLAT corpus of biomedical questions BIBREF4 . As such, BERT-QC is the first model to achieve strong performance across more than one question classification dataset."
],
"extractive_spans": [
"ARC",
"TREC",
"GARD",
"MLBioMedLAT"
],
"free_form_answer": "",
"highlighted_evidence": [
"Apart from term frequency methods, question classification methods developed on one dataset generally do not exhibit strong transfer performance to other datasets BIBREF3 . While BERT-QC achieves large gains over existing methods on the ARC dataset, here we demonstrate that BERT-QC also matches state-of-the-art performance on TREC BIBREF6 , while surpassing state-of-the-art performance on the GARD corpus of consumer health questions BIBREF3 and MLBioMedLAT corpus of biomedical questions BIBREF4 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"b7c2348cecc32c5bd50f01051b07263a612b68c6"
],
"answer": [
{
"evidence": [
"N-grams, POS, Hierarchical features: A baseline bag-of-words model incorporating both tagged and untagged unigrams and bigams. We also implement the hierarchical classification feature of Li and Roth BIBREF6 , where for a given question, the output of the classifier at coarser levels of granularity serves as input to the classifier at the current level of granularity.",
"CNN: Kim BIBREF28 demonstrated near state-of-the-art performance on a number of sentence classification tasks (including TREC question classification) by using pre-trained word embeddings BIBREF40 as feature extractors in a CNN model. Lei et al. BIBREF29 showed that 10 CNN variants perform within +/-2% of Kim's BIBREF28 model on TREC QC. We report performance of our best CNN model based on the MP-CNN architecture of Rao et al. BIBREF41 , which works to establish the similarity between question text and the definition text of the question classes. We adapt the MP-CNN model, which uses a “Siamese” structure BIBREF33 , to create separate representations for both the question and the question class. The model then makes use of a triple ranking loss function to minimize the distance between the representations of questions and the correct class while simultaneously maximising the distance between questions and incorrect classes. We optimize the network using the method of Tu BIBREF42 ."
],
"extractive_spans": [
"bag-of-words model",
"CNN"
],
"free_form_answer": "",
"highlighted_evidence": [
"N-grams, POS, Hierarchical features: A baseline bag-of-words model incorporating both tagged and untagged unigrams and bigams. ",
"CNN: Kim BIBREF28 demonstrated near state-of-the-art performance on a number of sentence classification tasks (including TREC question classification) by using pre-trained word embeddings BIBREF40 as feature extractors in a CNN model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"3d177c051eb6d9c36323e0289e19878c7d681a06",
"f4e279eba101c0a566bbcc0aa653093e6acdef88"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"6486b35619998c44190cfdc59bbd4fec79e67790",
"de4ff5546d2e96c00eff478f3032c28027154ec3"
],
"answer": [
{
"evidence": [
"Questions: We make use of the 7,787 science exam questions of the Aristo Reasoning Challenge (ARC) corpus BIBREF31 , which contains standardized 3rd to 9th grade science questions from 12 US states from the past decade. Each question is a 4-choice multiple choice question. Summary statistics comparing the complexity of ARC and TREC questions are shown in Table TABREF5 ."
],
"extractive_spans": [],
"free_form_answer": "from 3rd to 9th grade science questions collected from 12 US states",
"highlighted_evidence": [
"Questions: We make use of the 7,787 science exam questions of the Aristo Reasoning Challenge (ARC) corpus BIBREF31 , which contains standardized 3rd to 9th grade science questions from 12 US states from the past decade. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Questions: We make use of the 7,787 science exam questions of the Aristo Reasoning Challenge (ARC) corpus BIBREF31 , which contains standardized 3rd to 9th grade science questions from 12 US states from the past decade. Each question is a 4-choice multiple choice question. Summary statistics comparing the complexity of ARC and TREC questions are shown in Table TABREF5 ."
],
"extractive_spans": [],
"free_form_answer": "Used from science exam questions of the Aristo Reasoning Challenge (ARC) corpus.",
"highlighted_evidence": [
"We make use of the 7,787 science exam questions of the Aristo Reasoning Challenge (ARC) corpus BIBREF31 , which contains standardized 3rd to 9th grade science questions from 12 US states from the past decade. ",
" science exam questions of the Aristo Reasoning Challenge (ARC) corpus"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"Which datasets are used for evaluation?",
"What previous methods is their model compared to?",
"Did they use a crowdsourcing platform?",
"How was the dataset collected?"
],
"question_id": [
"a3bb9a936f61bafb509fa12ac0a61f91abcc5106",
"df6d327e176740da9edcc111a06374c54c8e809c",
"49764eee7fb523a6a28375cc699f5e0220b81766",
"3321d8d0e190d25958e5bfe0f3438b5c2ba80fd1"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Identifying the detailed problem domain of a question (QC label) can provide an important contextual signal to guide a QA system to the correct answer (A’). Here, knowing the problem domain of Gravitational Pull allows the model to recognize that some properties (such as weight) change when objects move between celestial bodies, while others (including density) are unaffected by such a change.",
"Table 1: Summary statistics comparing the surface and syntactic complexity of the TREC, GARD, and ARC datasets. ARC questions are complex, syntactically-diverse, and paired with a detailed classification scheme developed in this work.",
"Table 2: A subset (approximately 10%) of our question classification taxonomy for science exams, with top-level categories in bold. The full taxonomy contains 462 categories, with 406 of these having non-zero counts in the ARC corpus. “Prop.” represents the proportion of questions in ARC belonging to a given category. One branch of the taxonomy (Life Science → ... → Seed Dispersal) has been expanded to full depth.",
"Table 3: Results of the empirical evaluation on each of the question classification models on the ARC science question dataset, broken down by classification granularity (coarse (L1) to fine (L6)). Performance reflects mean average precision (MAP), where a duplicate table showing P@1 is included in the appendix. The best model at each level of granularity is shown in bold. * signifies that a given model is significantly better than baseline performance at full granularity (p < 0.01).",
"Table 4: Performance of BERT-QC on the TREC-6 (6 coarse categories) and TREC-50 (50 fine-grained categories) question classification task, in context with recent learned or rule-based models. Bold values represent top reported learned model performance. BERT-QC achieves performance similar to or exceeding the top reported non-rule-based models.",
"Table 5: Performance of BERT-QC on the GARD consumer health question dataset, which contains 2,937 questions labeled with 13 medical question classification categories. Following Roberts et al. (2014), this dataset was evaluated using 5-fold crossvalidation.",
"Table 6: Performance of BERT-QC on the MLBioMedLAT biomedical question dataset, which contains 780 questions labeled with 88 medical question classification categories. Following Wasim et al. (2019), this dataset was evaluated using 10-fold crossvalidation. Micro-F1 (µF1) and Accuracy follow Wasim et al.’s definitions for multi-label tasks.",
"Figure 2: Question answering performance (the proportion of questions answered correctly) for models that include question classification labels using query expansion, compared to a nolabel baseline model. While BERT-QC trained using question and answer text achieves higher QC performance, it leads to unstable QA performance due to it’s errors being highly correlated with incorrect answers. Predicted labels using BERT-QC (question text only) show a significant increase of +1.7% P@1 at L6 (p < 0.01). Models with gold labels show the ceiling performance of this approach with perfect question classification performance. Each point represents the average of 10 runs.",
"Table 8: An example of the query expansion technique for question classification labels, where the definition text for the QC label is appended to the question. Here, the gold label for this question is “MAT COS BOILING” (Matter → Changes of State → Boiling).",
"Table 9: Analysis of question answering performance on specific question classes on the BERT-QA model (L6). Question classes in this table are at the L2 level of specificity. Performance is reported on the development set, where N represents the total number of questions within a given question class.",
"Table 10: Interannotator Agreement at L6 (the native level the annotation was completed at), as well as agreement for truncated levels of the heirarchy from coarse to fine classification.",
"Table 11: Performance of each question classification model, expressed in Precision@1 (P@1). * signifies a given model is significantly different from the baseline model (p < 0.01).",
"Figure 3: Analysis of noisy question classification labels on overall QA performance. Here, the X axis represents the proportion of gold QA labels that have been randomly switched to another of the 406 possible labels at the finest level of granularity in the classification taxonomy (L6). QA performance decreases approximately linearly as the proportion of noisy QC labels increases. Each point represents the average of 20 experimental runs, with different questions and random labels for each run. QA performance reported is on the development set. Note that due to the runtime associated with this analysis, the results reported are using the BERT-Base model."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"5-Table5-1.png",
"5-Table6-1.png",
"6-Figure2-1.png",
"6-Table8-1.png",
"7-Table9-1.png",
"8-Table10-1.png",
"9-Table11-1.png",
"10-Figure3-1.png"
]
} | [
"How was the dataset collected?"
] | [
[
"1908.05441-Questions and Classification Taxonomy-0"
]
] | [
"Used from science exam questions of the Aristo Reasoning Challenge (ARC) corpus."
] | 165 |
1811.08603 | Neural Collective Entity Linking | Entity Linking aims to link entity mentions in texts to knowledge bases, and neural models have achieved recent success in this task. However, most existing methods rely on local contexts to resolve entities independently, which may usually fail due to the data sparsity of local information. To address this issue, we propose a novel neural model for collective entity linking, named as NCEL. NCEL applies Graph Convolutional Network to integrate both local contextual features and global coherence information for entity linking. To improve the computation efficiency, we approximately perform graph convolution on a subgraph of adjacent entity mentions instead of those in the entire text. We further introduce an attention scheme to improve the robustness of NCEL to data noise and train the model on Wikipedia hyperlinks to avoid overfitting and domain bias. In experiments, we evaluate NCEL on five publicly available datasets to verify the linking performance as well as generalization ability. We also conduct an extensive analysis of time complexity, the impact of key modules, and qualitative results, which demonstrate the effectiveness and efficiency of our proposed method. | {
"paragraphs": [
[
" This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/",
"Entity linking (EL), mapping entity mentions in texts to a given knowledge base (KB), serves as a fundamental role in many fields, such as question answering BIBREF0 , semantic search BIBREF1 , and information extraction BIBREF2 , BIBREF3 . However, this task is non-trivial because entity mentions are usually ambiguous. As shown in Figure FIGREF1 , the mention England refers to three entities in KB, and an entity linking system should be capable of identifying the correct entity as England cricket team rather than England and England national football team.",
"Entity linking is typically broken down into two main phases: (i) candidate generation obtains a set of referent entities in KB for each mention, and (ii) named entity disambiguation selects the possible candidate entity by solving a ranking problem. The key challenge lies in the ranking model that computes the relevance between candidates and the corresponding mentions based on the information both in texts and KBs BIBREF4 . In terms of the features used for ranking, we classify existing EL models into two groups: local models to resolve mentions independently relying on textual context information from the surrounding words BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , and global (collective) models, which are the main focus of this paper, that encourage the target entities of all mentions in a document to be topically coherent BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 .",
"Global models usually build an entity graph based on KBs to capture coherent entities for all identified mentions in a document, where the nodes are entities, and edges denote their relations. The graph provides highly discriminative semantic signals (e.g., entity relatedness) that are unavailable to local model BIBREF15 . For example (Figure FIGREF1 ), an EL model seemly cannot find sufficient disambiguation clues for the mention England from its surrounding words, unless it utilizes the coherence information of consistent topic “cricket\" among adjacent mentions England, Hussain, and Essex. Although the global model has achieved significant improvements, its limitation is threefold:",
"To mitigate the first limitation, recent EL studies introduce neural network (NN) models due to its amazing feature abstraction and generalization ability. In such models, words/entities are represented by low dimensional vectors in a continuous space, and features for mention as well as candidate entities are automatically learned from data BIBREF4 . However, existing NN-based methods for EL are either local models BIBREF16 , BIBREF17 or merely use word/entity embeddings for feature extraction and rely on another modules for collective disambiguation, which thus cannot fully utilize the power of NN models for collective EL BIBREF18 , BIBREF19 , BIBREF20 .",
"The second drawback of the global approach has been alleviated through approximate optimization techniques, such as PageRank/random walks BIBREF21 , graph pruning BIBREF22 , ranking SVMs BIBREF23 , or loopy belief propagation (LBP) BIBREF18 , BIBREF24 . However, these methods are not differentiable and thus difficult to be integrated into neural network models (the solution for the first limitation).",
"To overcome the third issue of inadequate training data, BIBREF17 has explored a massive amount of hyperlinks in Wikipedia, but these potential annotations for EL contain much noise, which may distract a naive disambiguation model BIBREF6 .",
"In this paper, we propose a novel Neural Collective Entity Linking model (NCEL), which performs global EL combining deep neural networks with Graph Convolutional Network (GCN) BIBREF25 , BIBREF26 that allows flexible encoding of entity graphs. It integrates both local contextual information and global interdependence of mentions in a document, and is efficiently trainable in an end-to-end fashion. Particularly, we introduce attention mechanism to robustly model local contextual information by selecting informative words and filtering out the noise. On the other hand, we apply GCNs to improve discriminative signals of candidate entities by exploiting the rich structure underlying the correct entities. To alleviate the global computations, we propose to convolute on the subgraph of adjacent mentions. Thus, the overall coherence shall be achieved in a chain-like way via a sliding window over the document. To the best of our knowledge, this is the first effort to develop a unified model for neural collective entity linking.",
"In experiments, we first verify the efficiency of NCEL via theoretically comparing its time complexity with other collective alternatives. Afterwards, we train our neural model using collected Wikipedia hyperlinks instead of dataset-specific annotations, and perform evaluations on five public available benchmarks. The results show that NCEL consistently outperforms various baselines with a favorable generalization ability. Finally, we further present the performance on a challenging dataset WW BIBREF19 as well as qualitative results, investigating the effectiveness of each key module."
],
[
"We denote INLINEFORM0 as a set of entity mentions in a document INLINEFORM1 , where INLINEFORM2 is either a word INLINEFORM3 or a mention INLINEFORM4 . INLINEFORM5 is the entity graph for document INLINEFORM6 derived from the given knowledge base, where INLINEFORM7 is a set of entities, INLINEFORM8 denotes the relatedness between INLINEFORM9 and higher values indicate stronger relations. Based on INLINEFORM10 , we extract a subgraph INLINEFORM11 for INLINEFORM12 , where INLINEFORM13 denotes the set of candidate entities for INLINEFORM14 . Note that we don't include the relations among candidates of the same mention in INLINEFORM15 because these candidates are mutually exclusive in disambiguation.",
"Formally, we define the entity linking problem as follows: Given a set of mentions INLINEFORM0 in a document INLINEFORM1 , and an entity graph INLINEFORM2 , the goal is to find an assignment INLINEFORM3 .",
"To collectively find the best assignment, NCEL aims to improve the discriminability of candidates' local features by using entity relatedness within a document via GCN, which is capable of learning a function of features on the graph through shared parameters over all nodes. Figure FIGREF10 shows the framework of NCEL including three main components:",
"Example As shown in Figure FIGREF10 , for the current mention England, we utilize its surrounding words as local contexts (e.g., surplus), and adjacent mentions (e.g., Hussian) as global information. Collectively, we utilize the candidates of England INLINEFORM0 as well as those entities of its adjacencies INLINEFORM1 to construct feature vectors for INLINEFORM2 and the subgraph of relatedness as inputs of our neural model. Let darker blue indicate higher probability of being predicted, the correct candidate INLINEFORM3 becomes bluer due to its bluer neighbor nodes of other mentions INLINEFORM4 . The dashed lines denote entity relations that have indirect impacts through the sliding adjacent window , and the overall structure shall be achieved via multiple sub-graphs by traversing all mentions.",
"Before introducing our model, we first describe the component of candidate generation."
],
[
"Similar to previous work BIBREF24 , we use the prior probability INLINEFORM0 of entity INLINEFORM1 conditioned on mention INLINEFORM2 both as a local feature and to generate candidate entities: INLINEFORM3 . We compute INLINEFORM4 based on statistics of mention-entity pairs from: (i) Wikipedia page titles, redirect titles and hyperlinks, (ii) the dictionary derived from a large Web Corpus BIBREF27 , and (iii) the YAGO dictionary with a uniform distribution BIBREF22 . We pick up the maximal prior if a mention-entity pair occurs in different resources. In experiments, to optimize for memory and run time, we keep only top INLINEFORM5 entities based on INLINEFORM6 . In the following two sections, we will present the key components of NECL, namely feature extraction and neural network for collective entity linking."
],
[
"The main goal of NCEL is to find a solution for collective entity linking using an end-to-end neural model, rather than to improve the measurements of local textual similarity or global mention/entity relatedness. Therefore, we use joint embeddings of words and entities at sense level BIBREF28 to represent mentions and its contexts for feature extraction. In this section, we give a brief description of our embeddings followed by our features used in the neural model."
],
[
"Following BIBREF28 , we use Wikipedia articles, hyperlinks, and entity outlinks to jointly learn word/mention and entity embeddings in a unified vector space, so that similar words/mentions and entities have similar vectors. To address the ambiguity of words/mentions, BIBREF28 represents each word/mention with multiple vectors, and each vector denotes a sense referring to an entity in KB. The quality of the embeddings is verified on both textual similarity and entity relatedness tasks.",
"Formally, each word/mention has a global embedding INLINEFORM0 , and multiple sense embeddings INLINEFORM1 . Each sense embedding INLINEFORM2 refers to an entity embedding INLINEFORM3 , while the difference between INLINEFORM4 and INLINEFORM5 is that INLINEFORM6 models the co-occurrence information of an entity in texts (via hyperlinks) and INLINEFORM7 encodes the structured entity relations in KBs. More details can be found in the original paper."
],
[
"Local features focus on how compatible the entity is mentioned in a piece of text (i.e., the mention and the context words). Except for the prior probability (Section SECREF9 ), we define two types of local features for each candidate entity INLINEFORM0 :",
"String Similarity Similar to BIBREF16 , we define string based features as follows: the edit distance between mention's surface form and entity title, and boolean features indicating whether they are equivalent, whether the mention is inside, starts with or ends with entity title and vice versa.",
"Compatibility We also measure the compatibility of INLINEFORM0 with the mention's context words INLINEFORM1 by computing their similarities based on joint embeddings: INLINEFORM2 and INLINEFORM3 , where INLINEFORM4 is the context embedding of INLINEFORM5 conditioned on candidate INLINEFORM6 and is defined as the average sum of word global vectors weighted by attentions: INLINEFORM7 ",
"where INLINEFORM0 is the INLINEFORM1 -th word's attention from INLINEFORM2 . In this way, we automatically select informative words by assigning higher attention weights, and filter out irrelevant noise through small weights. The attention INLINEFORM3 is computed as follows: INLINEFORM4 ",
"where INLINEFORM0 is the similarity measurement, and we use cosine similarity in the presented work. We concatenate the prior probability, string based similarities, compatibility similarities and the embeddings of contexts as well as the entity as the local feature vectors."
],
[
"The key idea of collective EL is to utilize the topical coherence throughout the entire document. The consistency assumption behind it is that: all mentions in a document shall be on the same topic. However, this leads to exhaustive computations if the number of mentions is large. Based on the observation that the consistency attenuates along with the distance between two mentions, we argue that the adjacent mentions might be sufficient for supporting the assumption efficiently.",
"Formally, we define neighbor mentions as INLINEFORM0 adjacent mentions before and after current mention INLINEFORM1 : INLINEFORM2 , where INLINEFORM3 is the pre-defined window size. Thus, the topical coherence at document level shall be achieved in a chain-like way. As shown in Figure FIGREF10 ( INLINEFORM4 ), mentions Hussain and Essex, a cricket player and the cricket club, provide adequate disambiguation clues to induce the underlying topic “cricket\" for the current mention England, which impacts positively on identifying the mention surrey as another cricket club via the common neighbor mention Essex.",
"A degraded case happens if INLINEFORM0 is large enough to cover the entire document, and the mentions used for global features become the same as the previous work, such as BIBREF21 . In experiments, we heuristically found a suitable INLINEFORM1 which is much smaller than the total number of mentions. The benefits of efficiency are in two ways: (i) to decrease time complexity, and (ii) to trim the entity graph into a fixed size of subgraph that facilitates computation acceleration through GPUs and batch techniques, which will be discussed in Section SECREF24 .",
"Given neighbor mentions INLINEFORM0 , we extract two types of vectorial global features and structured global features for each candidate INLINEFORM1 :",
"Neighbor Mention Compatibility Suppose neighbor mentions are topical coherent, a candidate entity shall also be compatible with neighbor mentions if it has a high compatibility score with the current mention, otherwise not. That is, we extract the vectorial global features by computing the similarities between INLINEFORM0 and all neighbor mentions: INLINEFORM1 , where INLINEFORM2 is the mention embedding by averaging the global vectors of words in its surface form: INLINEFORM3 , where INLINEFORM4 are tokenized words of mention INLINEFORM5 .",
"Subgraph Structure The above features reflect the consistent semantics in texts (i.e., mentions). We now extract structured global features using the relations in KB, which facilitates the inference among candidates to find the most topical coherent subset. For each document, we obtain the entity graph INLINEFORM0 by taking candidate entities of all mentions INLINEFORM1 as nodes, and using entity embeddings to compute their similarities as edges INLINEFORM2 . Then, we extract the subgraph structured features INLINEFORM3 for each entity INLINEFORM4 for efficiency.",
"Formally, we define the subgraph as: INLINEFORM0 , where INLINEFORM1 . For example (Figure FIGREF1 ), for entity England cricket team, the subgraph contains the relation from it to all candidates of neighbor mentions: England cricket team, Nasser Hussain (rugby union), Nasser Hussain, Essex, Essex County Cricket Club and Essex, New York. To support batch-wise acceleration, we represent INLINEFORM2 in the form of adjacency table based vectors: INLINEFORM3 , where INLINEFORM4 is the number of candidates per mention.",
"Finally, for each candidate INLINEFORM0 , we concatenate local features and neighbor mention compatibility scores as the feature vector INLINEFORM1 , and construct the subgraph structure representation INLINEFORM2 as the inputs of NCEL."
],
[
"NCEL incorporates GCN into a deep neural network to utilize structured graph information for collectively feature abstraction, while differs from conventional GCN in the way of applying the graph. Instead of the entire graph, only a subset of nodes is “visible\" to each node in our proposed method, and then the overall structured information shall be reached in a chain-like way. Fixing the size of the subset, NCEL is further speeded up by batch techniques and GPUs, and is efficient to large-scale data."
],
[
"GCNs are a type of neural network model that deals with structured data. It takes a graph as an input and output labels for each node. As a simplification of spectral graph convolutions, the main idea of BIBREF26 is similar to a propagation model: to enhance the features of a node according to its neighbor nodes. The formulation is as follows: INLINEFORM0 ",
"where INLINEFORM0 is a normalized adjacent matrix of the input graph with self-connection, INLINEFORM1 and INLINEFORM2 are the hidden states and weights in the INLINEFORM3 -th layer, and INLINEFORM4 is a non-linear activation, such as ReLu."
],
[
"As shown in Figure FIGREF10 , NCEL identifies the correct candidate INLINEFORM0 for the mention INLINEFORM1 by using vectorial features as well as structured relatedness with candidates of neighbor mentions INLINEFORM2 . Given feature vector INLINEFORM3 and subgraph representation INLINEFORM4 of each candidate INLINEFORM5 , we stack them as inputs for mention INLINEFORM6 : INLINEFORM7 , and the adjacent matrix INLINEFORM8 , where INLINEFORM9 denotes the subgraph with self-connection. We normalize INLINEFORM10 such that all rows sum to one, denoted as INLINEFORM11 , avoiding the change in the scale of the feature vectors.",
"Given INLINEFORM0 and INLINEFORM1 , the goal of NCEL is to find the best assignment: INLINEFORM2 ",
"where INLINEFORM0 is the output variable of candidates, and INLINEFORM1 is a probability function as follows: INLINEFORM2 ",
"where INLINEFORM0 is the score function parameters by INLINEFORM1 . NCEL learns the mapping INLINEFORM2 through a neural network including three main modules: encoder, sub-graph convolution network (sub-GCN) and decoder. Next, we introduce them in turn.",
"Encoder The function of this module is to integrate different features by a multi-layer perceptron (MLP): INLINEFORM0 ",
"where INLINEFORM0 is the hidden states of the current mention, INLINEFORM1 and INLINEFORM2 are trainable parameters and bias. We use ReLu as the non-linear activation INLINEFORM3 .",
"Sub-Graph Convolution Network Similar to GCN, this module learns to abstract features from the hidden state of the mention itself as well as its neighbors. Suppose INLINEFORM0 is the hidden states of the neighbor INLINEFORM1 , we stack them to expand the current hidden states of INLINEFORM2 as INLINEFORM3 , such that each row corresponds to that in the subgraph adjacent matrix INLINEFORM4 . We define sub-graph convolution as: INLINEFORM5 ",
"where INLINEFORM0 is a trainable parameter.",
"Decoder After INLINEFORM0 iterations of sub-graph convolution, the hidden states integrate both features of INLINEFORM1 and its neighbors. A fully connected decoder maps INLINEFORM2 to the number of candidates as follows: INLINEFORM3 ",
"where INLINEFORM0 ."
],
[
"The parameters of network are trained to minimize cross-entropy of the predicted and ground truth INLINEFORM0 : INLINEFORM1 ",
"Suppose there are INLINEFORM0 documents in training corpus, each document has a set of mentions INLINEFORM1 , leading to totally INLINEFORM2 mention sets. The overall objective function is as follows: INLINEFORM3 "
],
[
"To avoid overfitting with some dataset, we train NCEL using collected Wikipedia hyperlinks instead of specific annotated data. We then evaluate the trained model on five different benchmarks to verify the linking precision as well as the generalization ability. Furthermore, we investigate the effectiveness of key modules in NCEL and give qualitative results for comprehensive analysis."
],
[
"We compare NCEL with the following state-of-the-art EL methods including three local models and three types of global models:",
"Local models: He BIBREF29 and Chisholm BIBREF6 beat many global models by using auto-encoders and web links, respectively, and NTEE BIBREF16 achieves the best performance based on joint embeddings of words and entities.",
"Iterative model: AIDA BIBREF22 links entities by iteratively finding a dense subgraph.",
"Loopy Belief Propagation: Globerson BIBREF18 and PBoH BIBREF30 introduce LBP BIBREF31 techniques for collective inference, and Ganea BIBREF24 solves the global training problem via truncated fitting LBP.",
"PageRank/Random Walk: Boosting BIBREF32 , AGDISTISG BIBREF33 , Babelfy BIBREF34 , WAT BIBREF35 , xLisa BIBREF36 and WNED BIBREF19 performs PageRank BIBREF37 or random walk BIBREF38 on the mention-entity graph and use the convergence score for disambiguation.",
"For fairly comparison, we report the original scores of the baselines in the papers. Following these methods, we evaluate NCEL on the following five datasets: (1) CoNLL-YAGO BIBREF22 : the CoNLL 2003 shared task including testa of 4791 mentions in 216 documents, and testb of 4485 mentions in 213 documents. (2) TAC2010 BIBREF39 : constructed for the Text Analysis Conference that comprises 676 mentions in 352 documents for testing. (3) ACE2004 BIBREF23 : a subset of ACE2004 co-reference documents including 248 mentions in 35 documents, which is annotated by Amazon Mechanical Turk. (4) AQUAINT BIBREF40 : 50 news articles including 699 mentions from three different news agencies. (5) WW BIBREF19 : a new benchmark with balanced prior distributions of mentions, leading to a hard case of disambiguation. It has 6374 mentions in 310 documents automatically extracted from Wikipedia."
],
[
"Training We collect 50,000 Wikipedia articles according to the number of its hyperlinks as our training data. For efficiency, we trim the articles to the first three paragraphs leading to 1,035,665 mentions in total. Using CoNLL-Test A as the development set, we evaluate the trained NCEL on the above benchmarks. We set context window to 20, neighbor mention window to 6, and top INLINEFORM0 candidates for each mention. We use two layers with 2000 and 1 hidden units in MLP encoder, and 3 layers in sub-GCN. We use early stop and fine tune the embeddings. With a batch size of 16, nearly 3 epochs cost less than 15 minutes on the server with 20 core CPU and the GeForce GTX 1080Ti GPU with 12Gb memory. We use standard Precision, Recall and F1 at mention level (Micro) and at the document level (Macro) as measurements.",
"Complexity Analysis Compared with local methods, the main disadvantage of collective methods is high complexity and expensive costs. Suppose there are INLINEFORM0 mentions in documents on average, among these global models, NCEL not surprisingly has the lowest time complexity INLINEFORM1 since it only considers adjacent mentions, where INLINEFORM2 is the number of sub-GCN layers indicating the iterations until convergence. AIDA has the highest time complexity INLINEFORM3 in worst case due to exhaustive iteratively finding and sorting the graph. The LBP and PageRank/random walk based methods achieve similar high time complexity of INLINEFORM4 mainly because of the inference on the entire graph."
],
[
"GERBIL BIBREF41 is a benchmark entity annotation framework that aims to provide a unified comparison among different EL methods across datasets including ACE2004, AQUAINT and CoNLL. We compare NCEL with the global models that report the performance on GERBIL.",
"As shown in Table TABREF26 , NCEL achieves the best performance in most cases with an average gain of 2% on Micro F1 and 3% Macro F1. The baseline methods also achieve competitive results on some datasets but fail to adapt to the others. For example, AIDA and xLisa perform quite well on ACE2004 but poorly on other datasets, or WAT, PBoH, and WNED have a favorable performance on CoNLL but lower values on ACE2004 and AQUAINT. Our proposed method performs consistently well on all datasets that demonstrates the good generalization ability."
],
[
"In this section, we investigate the effectiveness of NCEL in the “easy\" and “hard\" datasets, respectively. Particularly, TAC2010, which has two mentions per document on average (Section SECREF19 ) and high prior probabilities of correct candidates (Figure FIGREF28 ), is regarded as the “easy\" case for EL, and WW is the “hard\" case since it has the most mentions with balanced prior probabilities BIBREF19 . Besides, we further compare the impact of key modules by removing the following part from NCEL: global features (NCEL-local), attention (NCEL-noatt), embedding features (NCEL-noemb), and the impact of the prior probability (prior).",
"The results are shown in Table FIGREF28 and Table FIGREF28 . We can see the average linking precision (Micro) of WW is lower than that of TAC2010, and NCEL outperforms all baseline methods in both easy and hard cases. In the “easy\" case, local models have similar performance with global models since only little global information is available (2 mentions per document). Besides, NN-based models, NTEE and NCEL-local, perform significantly better than others including most global models, demonstrating that the effectiveness of neural models deals with the first limitation in the introduction.",
"Impact of NCEL Modules ",
"As shown in Figure FIGREF28 , the prior probability performs quite well in TAC2010 but poorly in WW. Compared with NCEL-local, the global module in NCEL brings more improvements in the “hard\" case than that for “easy\" dataset, because local features are discriminative enough in most cases of TAC2010, and global information becomes quite helpful when local features cannot handle. That is, our propose collective model is robust and shows a good generalization ability to difficult EL. The improvements by each main module are relatively small in TAC2010, while the modules of attention and embedding features show non-negligible impacts in WW (even worse than local model), mainly because WW contains much noise, and these two modules are effective in improving the robustness to noise and the ability of generalization by selecting informative words and providing more accurate semantics, respectively."
],
[
"The results of example in Figure FIGREF1 are shown in Table TABREF30 , which is from CoNLL testa dataset. For mention Essex, although both NCEL and NCEL-local correctly identify entity Essex County Cricket Club, NCEL outputs higher probability due to the enhancement of neighbor mentions. Moreover, for mention England, NCEL-local cannot find enough disambiguation clues from its context words, such as surplus and requirements, and thus assigns a higher probability of 0.42 to the country England according to the prior probability. Collectively, NCEL correctly identifies England cricket team with a probability of 0.72 as compared with 0.20 in NCEL-local with the help of its neighbor mention Essex."
],
[
"In this paper, we propose a neural model for collective entity linking that is end-to-end trainable. It applies GCN on subgraphs instead of the entire entity graph to efficiently learn features from both local and global information. We design an attention mechanism that endows NCEL robust to noisy data. Trained on collected Wikipedia hyperlinks, NCEL outperforms the state-of-the-art collective methods across five different datasets. Besides, further analysis of the impacts of main modules as well as qualitative results demonstrates its effectiveness.",
"In the future, we will extend our method into cross-lingual settings to help link entities in low-resourced languages by exploiting rich knowledge from high-resourced languages, and deal with NIL entities to facilitate specific applications."
],
[
"The work is supported by National Key Research and Development Program of China (2017YFB1002101), NSFC key project (U1736204, 61661146007), and THUNUS NExT Co-Lab. "
]
],
"section_name": [
"Introduction",
"Preliminaries and Framework",
"Candidate Generation",
"Feature Extraction",
"Learning Joint Embeddings of Word and Entity",
"Local Features",
"Global Features",
"Neural Collective Entity Linking",
"Graph Convolutional Network",
"Model Architecture",
"Training",
"Experiments",
"Baselines and Datasets",
"Training Details and Running Time Analysis",
"Results on GERBIL",
"Results on TAC2010 and WW",
"Qualitative Analysis",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"5f3f44eecdffe1f74a41e68f923f180a99a1f589",
"6434cfacfb86aef132f5c648cd369c40e411c116"
],
"answer": [
{
"evidence": [
"For fairly comparison, we report the original scores of the baselines in the papers. Following these methods, we evaluate NCEL on the following five datasets: (1) CoNLL-YAGO BIBREF22 : the CoNLL 2003 shared task including testa of 4791 mentions in 216 documents, and testb of 4485 mentions in 213 documents. (2) TAC2010 BIBREF39 : constructed for the Text Analysis Conference that comprises 676 mentions in 352 documents for testing. (3) ACE2004 BIBREF23 : a subset of ACE2004 co-reference documents including 248 mentions in 35 documents, which is annotated by Amazon Mechanical Turk. (4) AQUAINT BIBREF40 : 50 news articles including 699 mentions from three different news agencies. (5) WW BIBREF19 : a new benchmark with balanced prior distributions of mentions, leading to a hard case of disambiguation. It has 6374 mentions in 310 documents automatically extracted from Wikipedia."
],
"extractive_spans": [
"CoNLL-YAGO",
"TAC2010",
"ACE2004",
"AQUAINT",
"WW"
],
"free_form_answer": "",
"highlighted_evidence": [
"Following these methods, we evaluate NCEL on the following five datasets: (1) CoNLL-YAGO BIBREF22 : the CoNLL 2003 shared task including testa of 4791 mentions in 216 documents, and testb of 4485 mentions in 213 documents. (2) TAC2010 BIBREF39 : constructed for the Text Analysis Conference that comprises 676 mentions in 352 documents for testing. (3) ACE2004 BIBREF23 : a subset of ACE2004 co-reference documents including 248 mentions in 35 documents, which is annotated by Amazon Mechanical Turk. (4) AQUAINT BIBREF40 : 50 news articles including 699 mentions from three different news agencies. (5) WW BIBREF19 : a new benchmark with balanced prior distributions of mentions, leading to a hard case of disambiguation. It has 6374 mentions in 310 documents automatically extracted from Wikipedia."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For fairly comparison, we report the original scores of the baselines in the papers. Following these methods, we evaluate NCEL on the following five datasets: (1) CoNLL-YAGO BIBREF22 : the CoNLL 2003 shared task including testa of 4791 mentions in 216 documents, and testb of 4485 mentions in 213 documents. (2) TAC2010 BIBREF39 : constructed for the Text Analysis Conference that comprises 676 mentions in 352 documents for testing. (3) ACE2004 BIBREF23 : a subset of ACE2004 co-reference documents including 248 mentions in 35 documents, which is annotated by Amazon Mechanical Turk. (4) AQUAINT BIBREF40 : 50 news articles including 699 mentions from three different news agencies. (5) WW BIBREF19 : a new benchmark with balanced prior distributions of mentions, leading to a hard case of disambiguation. It has 6374 mentions in 310 documents automatically extracted from Wikipedia."
],
"extractive_spans": [
"CoNLL-YAGO",
"TAC2010",
"ACE2004",
"AQUAINT",
"WW"
],
"free_form_answer": "",
"highlighted_evidence": [
"Following these methods, we evaluate NCEL on the following five datasets: (1) CoNLL-YAGO BIBREF22 : the CoNLL 2003 shared task including testa of 4791 mentions in 216 documents, and testb of 4485 mentions in 213 documents. (2) TAC2010 BIBREF39 : constructed for the Text Analysis Conference that comprises 676 mentions in 352 documents for testing. (3) ACE2004 BIBREF23 : a subset of ACE2004 co-reference documents including 248 mentions in 35 documents, which is annotated by Amazon Mechanical Turk. (4) AQUAINT BIBREF40 : 50 news articles including 699 mentions from three different news agencies. (5) WW BIBREF19 : a new benchmark with balanced prior distributions of mentions, leading to a hard case of disambiguation. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"5399519e2afc7b1bb9b9d1a853ca979d48396716"
],
"answer": [
{
"evidence": [
"In experiments, we first verify the efficiency of NCEL via theoretically comparing its time complexity with other collective alternatives. Afterwards, we train our neural model using collected Wikipedia hyperlinks instead of dataset-specific annotations, and perform evaluations on five public available benchmarks. The results show that NCEL consistently outperforms various baselines with a favorable generalization ability. Finally, we further present the performance on a challenging dataset WW BIBREF19 as well as qualitative results, investigating the effectiveness of each key module."
],
"extractive_spans": [
"NCEL consistently outperforms various baselines with a favorable generalization ability"
],
"free_form_answer": "",
"highlighted_evidence": [
"The results show that NCEL consistently outperforms various baselines with a favorable generalization ability."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"46ee2a5b61b65e85b6cc5070e084686ab262199b",
"6b9c6110343630144ad13c8b955117c037c36d59"
],
"answer": [
{
"evidence": [
"Training We collect 50,000 Wikipedia articles according to the number of its hyperlinks as our training data. For efficiency, we trim the articles to the first three paragraphs leading to 1,035,665 mentions in total. Using CoNLL-Test A as the development set, we evaluate the trained NCEL on the above benchmarks. We set context window to 20, neighbor mention window to 6, and top INLINEFORM0 candidates for each mention. We use two layers with 2000 and 1 hidden units in MLP encoder, and 3 layers in sub-GCN. We use early stop and fine tune the embeddings. With a batch size of 16, nearly 3 epochs cost less than 15 minutes on the server with 20 core CPU and the GeForce GTX 1080Ti GPU with 12Gb memory. We use standard Precision, Recall and F1 at mention level (Micro) and at the document level (Macro) as measurements.",
"The results are shown in Table FIGREF28 and Table FIGREF28 . We can see the average linking precision (Micro) of WW is lower than that of TAC2010, and NCEL outperforms all baseline methods in both easy and hard cases. In the “easy\" case, local models have similar performance with global models since only little global information is available (2 mentions per document). Besides, NN-based models, NTEE and NCEL-local, perform significantly better than others including most global models, demonstrating that the effectiveness of neural models deals with the first limitation in the introduction."
],
"extractive_spans": [],
"free_form_answer": "By calculating Macro F1 metric at the document level.",
"highlighted_evidence": [
" We use standard Precision, Recall and F1 at mention level (Micro) and at the document level (Macro) as measurements.",
"We can see the average linking precision (Micro) of WW is lower than that of TAC2010, and NCEL outperforms all baseline methods in both easy and hard cases."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To avoid overfitting with some dataset, we train NCEL using collected Wikipedia hyperlinks instead of specific annotated data. We then evaluate the trained model on five different benchmarks to verify the linking precision as well as the generalization ability. Furthermore, we investigate the effectiveness of key modules in NCEL and give qualitative results for comprehensive analysis."
],
"extractive_spans": [],
"free_form_answer": "by evaluating their model on five different benchmarks",
"highlighted_evidence": [
"We then evaluate the trained model on five different benchmarks to verify the linking precision as well as the generalization ability. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"4d1b951d12b7133464b910ca860e4cd6ab2abec2",
"701a28bf1b42e3be1239351325f923a89fd4fa59"
],
"answer": [
{
"evidence": [
"Complexity Analysis Compared with local methods, the main disadvantage of collective methods is high complexity and expensive costs. Suppose there are INLINEFORM0 mentions in documents on average, among these global models, NCEL not surprisingly has the lowest time complexity INLINEFORM1 since it only considers adjacent mentions, where INLINEFORM2 is the number of sub-GCN layers indicating the iterations until convergence. AIDA has the highest time complexity INLINEFORM3 in worst case due to exhaustive iteratively finding and sorting the graph. The LBP and PageRank/random walk based methods achieve similar high time complexity of INLINEFORM4 mainly because of the inference on the entire graph."
],
"extractive_spans": [],
"free_form_answer": "NCEL considers only adjacent mentions.",
"highlighted_evidence": [
"Suppose there are INLINEFORM0 mentions in documents on average, among these global models, NCEL not surprisingly has the lowest time complexity INLINEFORM1 since it only considers adjacent mentions, where INLINEFORM2 is the number of sub-GCN layers indicating the iterations until convergence."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Formally, we define neighbor mentions as INLINEFORM0 adjacent mentions before and after current mention INLINEFORM1 : INLINEFORM2 , where INLINEFORM3 is the pre-defined window size. Thus, the topical coherence at document level shall be achieved in a chain-like way. As shown in Figure FIGREF10 ( INLINEFORM4 ), mentions Hussain and Essex, a cricket player and the cricket club, provide adequate disambiguation clues to induce the underlying topic “cricket\" for the current mention England, which impacts positively on identifying the mention surrey as another cricket club via the common neighbor mention Essex."
],
"extractive_spans": [],
"free_form_answer": "More than that in some cases (next to adjacent) ",
"highlighted_evidence": [
"Formally, we define neighbor mentions as INLINEFORM0 adjacent mentions before and after current mention INLINEFORM1 : INLINEFORM2 , where INLINEFORM3 is the pre-defined window size. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Which datasets do they use?",
"How effective is their NCEL approach overall?",
"How do they verify generalization ability?",
"Do they only use adjacent entity mentions or use more than that in some cases (next to adjacent)?"
],
"question_id": [
"bb3267c3f0a12d8014d51105de5d81686afe5f1b",
"114934e1a1e818630ff33ac5c4cd4be6c6f75bb2",
"2439b6b92d73f660fe6af8d24b7bbecf2b3a3d72",
"b8d0e4e0e820753ffc107c1847fe1dfd48883989"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Illustration of named entity disambiguation for three mentions England, Hussain, and Essex. The nodes linked by arrowed lines are the candidate entities, where red solid lines denote target entities.",
"Figure 2: Framework of NCEL. The inputs of a set of mentions in a document are listed in the left side. The words in red indicate the current mention mi, where mi−1,mi+1 are neighbor mentions, and Φ(mi) = {ei1, ei2, ei3} denotes the candidate entity set for mi.",
"Table 1: Micro F1 (above) and Macro F1 (bottom) on GERBIL.",
"Table 2: Precision on WW",
"Table 4: Qualitative Analysis of the Example England."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"8-Table1-1.png",
"9-Table2-1.png",
"9-Table4-1.png"
]
} | [
"How do they verify generalization ability?",
"Do they only use adjacent entity mentions or use more than that in some cases (next to adjacent)?"
] | [
[
"1811.08603-Results on TAC2010 and WW-1",
"1811.08603-Experiments-0",
"1811.08603-Training Details and Running Time Analysis-0"
],
[
"1811.08603-Global Features-1",
"1811.08603-Training Details and Running Time Analysis-1"
]
] | [
"by evaluating their model on five different benchmarks",
"More than that in some cases (next to adjacent) "
] | 166 |
1909.03135 | To lemmatize or not to lemmatize: how word normalisation affects ELMo performance in word sense disambiguation | We critically evaluate the widespread assumption that deep learning NLP models do not require lemmatized input. To test this, we trained versions of contextualised word embedding ELMo models on raw tokenized corpora and on the corpora with word tokens replaced by their lemmas. Then, these models were evaluated on the word sense disambiguation task. This was done for the English and Russian languages. ::: The experiments showed that while lemmatization is indeed not necessary for English, the situation is different for Russian. It seems that for rich-morphology languages, using lemmatized training and testing data yields small but consistent improvements: at least for word sense disambiguation. This means that the decisions about text pre-processing before training ELMo should consider the linguistic nature of the language in question. | {
"paragraphs": [
[
"Deep contextualised representations of linguistic entities (words and/or sentences) are used in many current state-of-the-art NLP systems. The most well-known examples of such models are arguably ELMo BIBREF0 and BERT BIBREF1.",
"A long-standing tradition if the field of applying deep learning to NLP tasks can be summarised as follows: as minimal pre-processing as possible. It is widely believed that lemmatization or other text input normalisation is not necessary. Advanced neural architectures based on character input (CNNs, BPE, etc) are supposed to be able to learn how to handle spelling and morphology variations themselves, even for languages with rich morphology: `just add more layers!'. Contextualised embedding models follow this tradition: as a rule, they are trained on raw text collections, with minimal linguistic pre-processing. Below, we show that this is not entirely true.",
"It is known that for the previous generation of word embedding models (`static' ones like word2vec BIBREF2, where a word always has the same representation regardless of the context in which it occurs), lemmatization of the training and testing data improves their performance. BIBREF3 showed that this is true at least for semantic similarity and analogy tasks.",
"In this paper, we describe our experiments in finding out whether lemmatization helps modern contextualised embeddings (on the example of ELMo). We compare the performance of ELMo models trained on the same corpus before and after lemmatization. It is impossible to evaluate contextualised models on `static' tasks like lexical semantic similarity or word analogies. Because of this, we turned to word sense disambiguation in context (WSD) as an evaluation task.",
"In brief, we use contextualised representations of ambiguous words from the top layer of an ELMo model to train word sense classifiers and find out whether using lemmas instead of tokens helps in this task (see Section SECREF5). We experiment with the English and Russian languages and show that they differ significantly in the influence of lemmatization on the WSD performance of ELMo models.",
"Our findings and the contributions of this paper are:",
"Linguistic text pre-processing still matters in some tasks, even for contemporary deep representation learning algorithms.",
"For the Russian language, with its rich morphology, lemmatizing the training and testing data for ELMo representations yields small but consistent improvements in the WSD task. This is unlike English, where the differences are negligible."
],
[
"ELMo contextual word representations are learned in an unsupervised way through language modelling BIBREF0. The general architecture consists of a two-layer BiLSTM on top of a convolutional layer which takes character sequences as its input. Since the model uses fully character-based token representations, it avoids the problem of out-of-vocabulary words. Because of this, the authors explicitly recommend not to use any normalisation except tokenization for the input text. However, as we show below, while this is true for English, for other languages feeding ELMo with lemmas instead of raw tokens can improve WSD performance.",
"Word sense disambiguation or WSD BIBREF4 is the NLP task consisting of choosing a word sense from a pre-defined sense inventory, given the context in which the word is used. WSD fits well into our aim to intrinsically evaluate ELMo models, since solving the problem of polysemy and homonymy was one of the original promises of contextualised embeddings: their primary difference from the previous generation of word embedding models is that contextualised approaches generate different representations for homographs depending on the context. We use two lexical sample WSD test sets, further described in Section SECREF4."
],
[
"For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same.",
"The texts were tokenized and lemmatized with the UDPipe models for the respective languages trained on the Universal Dependencies 2.3 treebanks BIBREF5. UDPipe yields lemmatization accuracy about 96% for English and 97% for Russian; thus for the task at hand, we considered it to be gold and did not try to further improve the quality of normalisation itself (although it is not entirely error-free, see Section SECREF4).",
"ELMo models were trained on these corpora using the original TensorFlow implementation, for 3 epochs with batch size 192, on two GPUs. To train faster, we decreased the dimensionality of the LSTM layers from the default 4096 to 2048 for all the models."
],
[
"We used two WSD datasets for evaluation:",
"Senseval-3 for English BIBREF6",
"RUSSE'18 for Russian BIBREF7",
"The Senseval-3 dataset consists of lexical samples for nouns, verbs and adjectives; we used only noun target words:",
"argument",
"arm",
"atmosphere",
"audience",
"bank",
"degree",
"difference",
"difficulty",
"disc",
"image",
"interest",
"judgement",
"organization",
"paper",
"party",
"performance",
"plan",
"shelter",
"sort",
"source",
"An example for the ambiguous word argument is given below:",
"In some situations Postscript can be faster than the escape sequence type of printer control file. It uses post fix notation, where arguments come first and operators follow. This is basically the same as Reverse Polish Notation as used on certain calculators, and follows directly from the stack based approach.",
"It this sentence, the word `argument' is used in the sense of a mathematical operator.",
"The RUSSE'18 dataset was created in 2018 for the shared task in Russian word sense induction. This dataset contains only nouns; the list of words with their English translations is given in Table TABREF30.",
"Originally, it includes also the words russianбайка `tale/fleece' and russianгвоздика 'clove/small nail', but their senses are ambiguous only in some inflectional forms (not in lemmas), therefore we decided to exclude these words from evaluation.",
"The Russian dataset is more homogeneous compared to the English one, as for all the target words there is approximately the same number of context words in the examples. This is achieved by applying the lexical window (25 words before and after the target word) and cropping everything that falls outside of that window. In the English dataset, on the contrary, the whole paragraph with the target word is taken into account. We have tried cropping the examples for English as well, but it did not result in any change in the quality of classification. In the end, we decided not to apply the lexical window to the English dataset so as not to alter it and rather use it in the original form.",
"Here is an example from the RUSSE'18 for the ambiguous word russianмандарин `mandarin' in the sense `Chinese official title':",
"russian“...дипломатического корпуса останкам богдыхана и императрицы обставлено было с необычайной торжественностью. Тысячи мандаринов и других высокопоставленных лиц разместились шпалерами на трех мраморных террасах ведущих к...”",
"`...the diplomatic bodies of the Bogdikhan and the Empress was furnished with extraordinary solemnity. Thousands of mandarins and other dignitaries were placed on three marble terraces leading to...'.",
"Table TABREF31 compares both datasets. Before usage, they were pre-processed in the same way as the training corpora for ELMo (see Section SECREF3), thus producing a lemmatized and a non-lemmatized versions of each.",
"As we can see from Table TABREF31, for 20 target words in English there are 24 lemmas, and for 18 target words in Russian there are 36 different lemmas. These numbers are explained by occasional errors in the UDPipe lemmatization. Another interesting thing to observe is the number of distinct word forms for every language. For English, there are 39 distinct forms for 20 target nouns: singular and plural for every noun, except `atmosphere' which is used only in the singular form. Thus, inflectional variability of English nouns is covered by the dataset almost completely. For Russian, we observe 132 distinct forms for 18 target nouns, giving more than 7 inflectional forms per each word. Note that this still covers only half of all the inflectional variability of Russian: this language features 12 distinct forms for each noun (6 cases and 2 numbers).",
"To sum up, the RUSSE'18 dataset is morphologically far more complex than the Senseval3, reflecting the properties of the respective languages. In the next section we will see that this leads to substantial differences regarding comparisons between token-based and lemma-based ELMo models."
],
[
"Following BIBREF8, we decided to avoid using any standard train-test splits for our WSD datasets. Instead, we rely on per-word random splits and 5-fold cross-validation. This means that for each target word we randomly generate 5 different divisions of its context sentences list into train and test sets, and then train and test 5 different classifier models on this data. The resulting performance score for each target word is the average of 5 macro-F1 scores produced by these classifiers.",
"ELMo models can be employed for the WSD task in two different ways: either by fine-tuning the model or by extracting word representations from it and then using them as features in a downstream classifier. We decided to stick to the second (feature extraction) approach, since it is conceptually and computationally simpler. Additionally, BIBREF9 showed that for most NLP tasks (except those focused on sentence pairs) the performance of feature extraction and fine-tuning is nearly the same. Thus we extracted the single vector of the target word from the ELMo top layer (`target' rows in Table TABREF32) or the averaged ELMo top layer vectors of all words in the context sentence (`averaged' rows in Table TABREF32).",
"For comparison, we also report the scores of the `averaged vectors' representations with Continuous Skipgram BIBREF2 embedding models trained on the English or Russian Wikipedia dumps (`SGNS' rows): before the advent of contextualised models, this was one of the most widely used ways to `squeeze' the meaning of a sentence into a fixed-size vector. Of course it does not mean that the meaning of a sentence always determines the senses all its words are used in. However, averaging representations of words in contexts as a proxy to the sense of one particular word is a long established tradition in WSD, starting at least from BIBREF10. Also, since SGNS is a `static' embedding model, it is of course not possible to use only target word vectors as features: they would be identical whatever the context is.",
"Simple logistic regression was used as a classification algorithm. We also tested a multi-layer perceptron (MLP) classifier with 200-neurons hidden layer, which yielded essentially the same results. This leads us to believe that our findings are not classifier-dependent.",
"Table TABREF32 shows the results, together with the random and most frequent sense (MFS) baselines for each dataset.",
"First, ELMo outperforms SGNS for both languages, which comes as no surprise. Second, the approach with averaging representations from all words in the sentence is not beneficial for WSD with ELMo: for English data, it clearly loses to a single target word representation, and for Russian there are no significant differences (and using a single target word is preferable from the computational point of view, since it does not require the averaging operation). Thus, below we discuss only the single target word usage mode of ELMo.",
"But the most important part is the comparison between using tokens or lemmas in the train and test data. For the `static' SGNS embeddings, it does not significantly change the WSD scores for both languages. The same is true for English ELMo models, where differences are negligible and seem to be simple fluctuations. However, for Russian, ELMo (target) on lemmas outperforms ELMo on tokens, with small but significant improvement. The most plausible explanation for this is that (despite of purely character-based input of ELMo) the model does not have to learn idiosyncrasies of a particular language morphology. Instead, it can use its (limited) capacity to better learn lexical semantic structures, leading to better WSD performance. The box plots FIGREF33 and FIGREF35 illustrate the scores dispersion across words in the test sets for English and Russian correspondingly (orange lines are medians). In the next section SECREF6 we analyse the results qualitatively."
],
[
"In this section we focus on the comparison of scores for the Russian dataset. The classifier for Russian had to choose between fewer classes (two or three), which made the scores higher and more consistent than for the English dataset. Overall, we see improvements in the scores for the majority of words, which proves that lemmatization for morphologically rich languages is beneficial.",
"We decided to analyse more closely those words for which the difference in the scores between lemma-based and token-based models was statistically significant. By `significant' we mean that the scores differ by more that one standard deviation (the largest standard deviation value in the two sets was taken). The resulting list of targets words with significant difference in scores is given in Table TABREF36.",
"We can see that among 18 words in the dataset only 3 exhibit significant improvement in their scores when moving from tokens to lemmas in the input data. It shows that even though the overall F1 scores for the Russian data have shown the plausibility of lemmatization, this improvement is mostly driven by a few words. It should be noted that these words' scores feature very low standard deviation values (for other words, standard deviation values were above 0.1, making F1 differences insignificant). Such a behaviour can be caused by more consistent differentiation of context for various senses of these 3 words. For example, with the word russianкабачок `squash / small restaurant', the contexts for both senses can be similar, since they are all related to food. This makes the WSD scores unstable. On the other hand, for russianакция `stock, share / event', russianкрона `crown (tree / coin)' or russianкруп `croup (horse body part / illness)', their senses are not related, which resulted in more stable results and significant difference in the scores (see Table TABREF36).",
"There is only one word in the RUSSE'18 dataset for which the score has strongly decreased when moving to lemma-based models: russianдомино `domino (game / costume)'. In fact, the score difference here lies on the border of one standard deviation, so strictly speaking it is not really significant. However, the word still presents an interesting phenomenon.",
"russianДомино is the only target noun in the RUSSE'18 that has no inflected forms, since it is a borrowed word. This leaves no room for improvement when using lemma-based ELMo models: all tokens of this word are already identical. At the same time, some information about inflected word forms in the context can be useful, but it is lost during lemmatization, and this leads to the decreased score. Arguably, this means that lemmatization brings along both advantages and disadvantages for WSD with ELMo. For inflected words (which constitute the majority of Russian vocabulary) profits outweigh the losses, but for atypical non-changeable words it can be the opposite.",
"The scores for the excluded target words russianбайка `tale / fleece' and russianгвоздика 'clove / small nail' are given in Table TABREF37 (recall that they were excluded because of being ambiguous only in some inflectional forms). For these words we can see a great improvement with lemma-based models. This, of course stems from the fact that these words in different senses have different lemmas. Therefore, the results are heavily dependent on the quality of lemmatization."
],
[
"We evaluated how the ability of ELMo contextualised word embedding models to disambiguate word senses depends on the nature of the training data. In particular, we compared the models trained on raw tokenized corpora and those trained on the corpora with word tokens replaced by their normal forms (lemmas). The models we trained are publicly available via the NLPL word embeddings repository BIBREF3.",
"In the majority of research papers on deep learning approaches to NLP, it is assumed that lemmatization is not necessary, especially when using powerful contextualised embeddings. Our experiments show that this is indeed true for languages with simple morphology (like English). However, for rich-morphology languages (like Russian), using lemmatized training data yields small but consistent improvements in the word sense disambiguation task. These improvements are not observed for rare words which lack inflected forms; this further supports our hypothesis that better WSD scores of lemma-based models are related to them better handling multiple word forms in morphology-rich languages.",
"Of course, lemmatization is by all means not a silver bullet. In other tasks, where inflectional properties of words are important, it can even hurt the performance. But this is true for any NLP systems, not only deep learning based ones.",
"The take-home message here is twofold: first, text pre-processing still matters for contemporary deep learning algorithms. Their impressive learning abilities do not always allow them to infer normalisation rules themselves, from simply optimising the language modelling task. Second, the nature of language at hand matters as well, and differences in this nature can result in different decisions being optimal or sub-optimal at the stage of deep learning models training. The simple truth `English is not representative of all languages on Earth' still holds here.",
"In the future, we plan to extend our work by including more languages into the analysis. Using Russian and English allowed us to hypothesise about the importance of morphological character of a language. But we only scratched the surface of the linguistic diversity. To verify this claim, it is necessary to analyse more strongly inflected languages like Russian as well as more weakly inflected (analytical) languages similar to English. This will help to find out if the inflection differences are important for training deep learning models across human languages in general."
]
],
"section_name": [
"Introduction",
"Related work",
"Training ELMo",
"Word sense disambiguation test sets",
"Experiments",
"Qualitative analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"6bff9ac6ac58075ee5fb0df2f238c900eec8f91d",
"8e82dec7c80946d23dbb90aac913fc2991fbe995"
],
"answer": [
{
"evidence": [
"russianДомино is the only target noun in the RUSSE'18 that has no inflected forms, since it is a borrowed word. This leaves no room for improvement when using lemma-based ELMo models: all tokens of this word are already identical. At the same time, some information about inflected word forms in the context can be useful, but it is lost during lemmatization, and this leads to the decreased score. Arguably, this means that lemmatization brings along both advantages and disadvantages for WSD with ELMo. For inflected words (which constitute the majority of Russian vocabulary) profits outweigh the losses, but for atypical non-changeable words it can be the opposite."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"At the same time, some information about inflected word forms in the context can be useful, but it is lost during lemmatization, and this leads to the decreased score. Arguably, this means that lemmatization brings along both advantages and disadvantages for WSD with ELMo. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"russianДомино is the only target noun in the RUSSE'18 that has no inflected forms, since it is a borrowed word. This leaves no room for improvement when using lemma-based ELMo models: all tokens of this word are already identical. At the same time, some information about inflected word forms in the context can be useful, but it is lost during lemmatization, and this leads to the decreased score. Arguably, this means that lemmatization brings along both advantages and disadvantages for WSD with ELMo. For inflected words (which constitute the majority of Russian vocabulary) profits outweigh the losses, but for atypical non-changeable words it can be the opposite."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"At the same time, some information about inflected word forms in the context can be useful, but it is lost during lemmatization, and this leads to the decreased score. Arguably, this means that lemmatization brings along both advantages and disadvantages for WSD with ELMo. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"4777f06304cf7779e0285d51a80f45fddb26e499",
"785a0e64a5190a8e5704df05400526cb9f45ae17"
],
"answer": [
{
"evidence": [
"For the Russian language, with its rich morphology, lemmatizing the training and testing data for ELMo representations yields small but consistent improvements in the WSD task. This is unlike English, where the differences are negligible."
],
"extractive_spans": [
"Russian"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the Russian language, with its rich morphology, lemmatizing the training and testing data for ELMo representations yields small but consistent improvements in the WSD task. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The RUSSE'18 dataset was created in 2018 for the shared task in Russian word sense induction. This dataset contains only nouns; the list of words with their English translations is given in Table TABREF30.",
"To sum up, the RUSSE'18 dataset is morphologically far more complex than the Senseval3, reflecting the properties of the respective languages. In the next section we will see that this leads to substantial differences regarding comparisons between token-based and lemma-based ELMo models."
],
"extractive_spans": [
"Russian"
],
"free_form_answer": "",
"highlighted_evidence": [
"The RUSSE'18 dataset was created in 2018 for the shared task in Russian word sense induction. ",
"To sum up, the RUSSE'18 dataset is morphologically far more complex than the Senseval3, reflecting the properties of the respective languages. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"d1a1ed60da633064be199f1f3274ec6969e74f78"
],
"answer": [
{
"evidence": [
"A long-standing tradition if the field of applying deep learning to NLP tasks can be summarised as follows: as minimal pre-processing as possible. It is widely believed that lemmatization or other text input normalisation is not necessary. Advanced neural architectures based on character input (CNNs, BPE, etc) are supposed to be able to learn how to handle spelling and morphology variations themselves, even for languages with rich morphology: `just add more layers!'. Contextualised embedding models follow this tradition: as a rule, they are trained on raw text collections, with minimal linguistic pre-processing. Below, we show that this is not entirely true."
],
"extractive_spans": [],
"free_form_answer": "Advanced neural architectures and contextualized embedding models learn how to handle spelling and morphology variations.",
"highlighted_evidence": [
"Advanced neural architectures based on character input (CNNs, BPE, etc) are supposed to be able to learn how to handle spelling and morphology variations themselves, even for languages with rich morphology: `just add more layers!'. Contextualised embedding models follow this tradition: as a rule, they are trained on raw text collections, with minimal linguistic pre-processing. Below, we show that this is not entirely true."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"bbc423e6864e382fa79fba935bc01049ffa623e5",
"c472f23f24299a1f23d06ff5f3f53a66d7521053"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Training corpora",
"For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same."
],
"extractive_spans": [],
"free_form_answer": "2174000000, 989000000",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Training corpora",
"For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same.",
"FLOAT SELECTED: Table 1: Training corpora"
],
"extractive_spans": [],
"free_form_answer": "2174 million tokens for English and 989 million tokens for Russian",
"highlighted_evidence": [
"For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). ",
"As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same.",
"FLOAT SELECTED: Table 1: Training corpora"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do the authors mention any downside of lemmatizing input before training ELMo?",
"What other examples of morphologically-rich languages do the authors give?",
"Why is lemmatization not necessary in English?",
"How big was the corpora they trained ELMo on?"
],
"question_id": [
"5aa12b4063d6182a71870c98e4e1815ff3dc8a72",
"22815878083ebd2f9e08bc33a5e733063dac7a0f",
"220d11a03897d85af91ec88a9b502815c7d2b6f3",
"d509081673f5667060400eb325a8050fa5db7cc8"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"ELMo",
"ELMo",
"ELMo",
"ELMo"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Training corpora",
"Table 2: Target ambiguous words for Russian (RUSSE’18)",
"Table 3: Characteristics of the WSD datasets. The numbers in the lower part are average values.",
"Table 4: Averaged macro-F1 scores for WSD",
"Figure 1: Word sense disambiguation performance on the English data across words (ELMo target models).",
"Figure 2: Word sense disambiguation performance on the Russian data across words (ELMo target models).",
"Table 5: F1 scores for target words from RUSSE’18 with significant differences between lemma-based and token-based models",
"Table 6: F1 scores for the excluded target words from RUSSE’18."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"5-Figure1-1.png",
"5-Figure2-1.png",
"5-Table5-1.png",
"6-Table6-1.png"
]
} | [
"Why is lemmatization not necessary in English?",
"How big was the corpora they trained ELMo on?"
] | [
[
"1909.03135-Introduction-1"
],
[
"1909.03135-Training ELMo-0",
"1909.03135-2-Table1-1.png"
]
] | [
"Advanced neural architectures and contextualized embedding models learn how to handle spelling and morphology variations.",
"2174 million tokens for English and 989 million tokens for Russian"
] | 167 |
1804.07789 | Generating Descriptions from Structured Data Using a Bifocal Attention Mechanism and Gated Orthogonalization | In this work, we focus on the task of generating natural language descriptions from a structured table of facts containing fields (such as nationality, occupation, etc) and values (such as Indian, actor, director, etc). One simple choice is to treat the table as a sequence of fields and values and then use a standard seq2seq model for this task. However, such a model is too generic and does not exploit task-specific characteristics. For example, while generating descriptions from a table, a human would attend to information at two levels: (i) the fields (macro level) and (ii) the values within the field (micro level). Further, a human would continue attending to a field for a few timesteps till all the information from that field has been rendered and then never return back to this field (because there is nothing left to say about it). To capture this behavior we use (i) a fused bifocal attention mechanism which exploits and combines this micro and macro level information and (ii) a gated orthogonalization mechanism which tries to ensure that a field is remembered for a few time steps and then forgotten. We experiment with a recently released dataset which contains fact tables about people and their corresponding one line biographical descriptions in English. In addition, we also introduce two similar datasets for French and German. Our experiments show that the proposed model gives 21% relative improvement over a recently proposed state of the art method and 10% relative improvement over basic seq2seq models. The code and the datasets developed as a part of this work are publicly available. | {
"paragraphs": [
[
"Rendering natural language descriptions from structured data is required in a wide variety of commercial applications such as generating descriptions of products, hotels, furniture, etc., from a corresponding table of facts about the entity. Such a table typically contains {field, value} pairs where the field is a property of the entity (e.g., color) and the value is a set of possible assignments to this property (e.g., color = red). Another example of this is the recently introduced task of generating one line biography descriptions from a given Wikipedia infobox BIBREF0 . The Wikipedia infobox serves as a table of facts about a person and the first sentence from the corresponding article serves as a one line description of the person. Figure FIGREF2 illustrates an example input infobox which contains fields such as Born, Residence, Nationality, Fields, Institutions and Alma Mater. Each field further contains some words (e.g., particle physics, many-body theory, etc.). The corresponding description is coherent with the information contained in the infobox.",
"Note that the number of fields in the infobox and the ordering of the fields within the infobox varies from person to person. Given the large size (700K examples) and heterogeneous nature of the dataset which contains biographies of people from different backgrounds (sports, politics, arts, etc.), it is hard to come up with simple rule-based templates for generating natural language descriptions from infoboxes, thereby making a case for data-driven models. Based on the recent success of data-driven neural models for various other NLG tasks BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , one simple choice is to treat the infobox as a sequence of {field, value} pairs and use a standard seq2seq model for this task. However, such a model is too generic and does not exploit the specific characteristics of this task as explained below. First, note that while generating such descriptions from structured data, a human keeps track of information at two levels. Specifically, at a macro level, she would first decide which field to mention next and then at a micro level decide which of the values in the field needs to be mentioned next. For example, she first decides that at the current step, the field occupation needs attention and then decides which is the next appropriate occupation to attend to from the set of occupations (actor, director, producer, etc.). To enable this, we use a bifocal attention mechanism which computes an attention over fields at a macro level and over values at a micro level. We then fuse these attention weights such that the attention weight for a field also influences the attention over the values within it. Finally, we feed a fused context vector to the decoder which contains both field level and word level information. Note that such two-level attention mechanisms BIBREF6 , BIBREF7 , BIBREF8 have been used in the context of unstructured data (as opposed to structured data in our case), where at a macro level one needs to pay attention to sentences and at a micro level to words in the sentences.",
"Next, we observe that while rendering the output, once the model pays attention to a field (say, occupation) it needs to stay on this field for a few timesteps (till all the occupations are produced in the output). We refer to this as the stay on behavior. Further, we note that once the tokens of a field are referred to, they are usually not referred to later. For example, once all the occupations have been listed in the output we will never visit the occupation field again because there is nothing left to say about it. We refer to this as the never look back behavior. To model the stay on behaviour, we introduce a forget (or remember) gate which acts as a signal to decide when to forget the current field (or equivalently to decide till when to remember the current field). To model the never look back behaviour we introduce a gated orthogonalization mechanism which ensures that once a field is forgotten, subsequent field context vectors fed to the decoder are orthogonal to (or different from) the previous field context vectors.",
"We experiment with the WikiBio dataset BIBREF0 which contains around 700K {infobox, description} pairs and has a vocabulary of around 400K words. We show that the proposed model gives a relative improvement of 21% and 20% as compared to current state of the art models BIBREF0 , BIBREF9 on this dataset. The proposed model also gives a relative improvement of 10% as compared to the basic seq2seq model. Further, we introduce new datasets for French and German on the same lines as the English WikiBio dataset. Even on these two datasets, our model outperforms the state of the art methods mentioned above."
],
[
"Natural Language Generation has always been of interest to the research community and has received a lot of attention in the past. The approaches for NLG range from (i) rule based approaches (e.g., BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 ) (ii) modular statistical approaches which divide the process into three phases (planning, selection and surface realization) and use data driven approaches for one or more of these phases BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 (iii) hybrid approaches which rely on a combination of handcrafted rules and corpus statistics BIBREF20 , BIBREF21 , BIBREF22 and (iv) the more recent neural network based models BIBREF1 .",
"Neural models for NLG have been proposed in the context of various tasks such as machine translation BIBREF1 , document summarization BIBREF2 , BIBREF4 , paraphrase generation BIBREF23 , image captioning BIBREF24 , video summarization BIBREF25 , query based document summarization BIBREF5 and so on. Most of these models are data hungry and are trained on large amounts of data. On the other hand, NLG from structured data has largely been studied in the context of small datasets such as WeatherGov BIBREF26 , RoboCup BIBREF27 , NFL Recaps BIBREF15 , Prodigy-Meteo BIBREF28 and TUNA Challenge BIBREF29 . Recently weather16 proposed RNN/LSTM based neural encoder-decoder models with attention for WeatherGov and RoboCup datasets.",
"Unlike the datasets mentioned above, the biography dataset introduced by lebret2016neural is larger (700K {table, descriptions} pairs) and has a much larger vocabulary (400K words as opposed to around 350 or fewer words in the above datasets). Further, unlike the feed-forward neural network based model proposed by BIBREF0 we use a sequence to sequence model and introduce components to address the peculiar characteristics of the task. Specifically, we introduce neural components to address the need for attention at two levels and to address the stay on and never look back behaviour required by the decoder. KiddonZC16 have explored the use of checklists to track previously visited ingredients while generating recipes from ingredients. Note that two-level attention mechanisms have also been used in the context of summarization BIBREF6 , document classification BIBREF7 , dialog systems BIBREF8 , etc. However, these works deal with unstructured data (sentences at the higher level and words at a lower level) as opposed to structured data in our case."
],
[
"As input we are given an infobox INLINEFORM0 , which is a set of pairs INLINEFORM1 where INLINEFORM2 corresponds to field names and INLINEFORM3 is the sequence of corresponding values and INLINEFORM4 is the total number of fields in INLINEFORM5 . For example, INLINEFORM6 could be one such pair in this set. Given such an input, the task is to generate a description INLINEFORM7 containing INLINEFORM8 words. A simple solution is to treat the infobox as a sequence of fields followed by the values corresponding to the field in the order of their appearance in the infobox. For example, the infobox could be flattened to produce the following input sequence (the words in bold are field names which act as delimiters)",
"[Name] John Doe [Birth_Date] 19 March 1981 [Nationality] Indian .....",
"The problem can then be cast as a seq2seq generation problem and can be modeled using a standard neural architecture comprising of three components (i) an input encoder (using GRU/LSTM cells), (ii) an attention mechanism to attend to important values in the input sequence at each time step and (iii) a decoder to decode the output one word at a time (again, using GRU/LSTM cells). However, this standard model is too generic and does not exploit the specific characteristics of this task. We propose additional components, viz., (i) a fused bifocal attention mechanism which operates on fields (macro) and values (micro) and (ii) a gated orthogonalization mechanism to model stay on and never look back behavior."
],
[
"Intuitively, when a human writes a description from a table she keeps track of information at two levels. At the macro level, it is important to decide which is the appropriate field to attend to next and at a micro level (i.e., within a field) it is important to know which values to attend to next. To capture this behavior, we use a bifocal attention mechanism as described below.",
"Macro Attention: Consider the INLINEFORM0 -th field INLINEFORM1 which has values INLINEFORM2 . Let INLINEFORM3 be the representation of this field in the infobox. This representation can either be (i) the word embedding of the field name or (ii) some function INLINEFORM4 of the values in the field or (iii) a concatenation of (i) and (ii). The function INLINEFORM5 could simply be the sum or average of the embeddings of the values in the field. Alternately, this function could be a GRU (or LSTM) which treats these values within a field as a sequence and computes the field representation as the final representation of this sequence (i.e., the representation of the last time-step). We found that bidirectional GRU is a better choice for INLINEFORM6 and concatenating the embedding of the field name with this GRU representation works best. Further, using a bidirectional GRU cell to take contextual information from neighboring fields also helps (these are the orange colored cells in the top-left block in Figure FIGREF3 with macro attention). Given these representations INLINEFORM7 for all the INLINEFORM8 fields we compute an attention over the fields (macro level). DISPLAYFORM0 ",
"where INLINEFORM0 is the state of the decoder at time step INLINEFORM1 . INLINEFORM2 and INLINEFORM3 are parameters, INLINEFORM4 is the total number of fields in the input, INLINEFORM5 is the macro (field level) context vector at the INLINEFORM6 -th time step of the decoder.",
"Micro Attention: Let INLINEFORM0 be the representation of the INLINEFORM1 -th value in a given field. This representation could again either be (i) simply the embedding of this value (ii) or a contextual representation computed using a function INLINEFORM2 which also considers the other values in the field. For example, if INLINEFORM3 are the values in a field then these values can be treated as a sequence and the representation of the INLINEFORM4 -th value can be computed using a bidirectional GRU over this sequence. Once again, we found that using a bi-GRU works better then simply using the embedding of the value. Once we have such a representation computed for all values across all the fields, we compute the attention over these values (micro level) as shown below : DISPLAYFORM0 ",
" where INLINEFORM0 is the state of the decoder at time step INLINEFORM1 . INLINEFORM2 and INLINEFORM3 are parameters, INLINEFORM4 is the total number of values across all the fields.",
"Fused Attention: Intuitively, the attention weights assigned to a field should have an influence on all the values belonging to the particular field. To ensure this, we reweigh the micro level attention weights based on the corresponding macro level attention weights. In other words, we fuse the attention weights at the two levels as: DISPLAYFORM0 ",
" where INLINEFORM0 is the field corresponding to the INLINEFORM1 -th value, INLINEFORM2 is the macro level context vector."
],
[
"We now describe a series of choices made to model stay-on and never look back behavior. We first begin with the stay-on property which essentially implies that if we have paid attention to the field INLINEFORM0 at timestep INLINEFORM1 then we are likely to pay attention to the same field for a few more time steps. For example, if we are focusing on the occupation field at this timestep then we are likely to focus on it for the next few timesteps till all relevant values in this field have been included in the generated description. In other words, we want to remember the field context vector INLINEFORM2 for a few timesteps. One way of ensuring this is to use a remember (or forget) gate as given below which remembers the previous context vector when required and forgets it when it is time to move on from that field. DISPLAYFORM0 ",
"where INLINEFORM0 are parameters to be learned. The job of the forget gate is to ensure that INLINEFORM1 is similar to INLINEFORM2 when required (i.e., by learning INLINEFORM3 when we want to continue focusing on the same field) and different when it is time to move on (by learning that INLINEFORM4 ).",
"Next, the never look back property implies that once we have moved away from a field we are unlikely to pay attention to it again. For example, once we have rendered all the occupations in the generated description there is no need to return back to the occupation field. In other words, once we have moved on ( INLINEFORM0 ), we want the successive field context vectors INLINEFORM1 to be very different from the previous field vectors INLINEFORM2 . One way of ensuring this is to orthogonalize successive field vectors using DISPLAYFORM0 ",
"where INLINEFORM0 is the dot product between vectors INLINEFORM1 and INLINEFORM2 . The above equation essentially subtracts the component of INLINEFORM3 along INLINEFORM4 . INLINEFORM5 is a learned parameter which controls the degree of orthogonalization thereby allowing a soft orthogonalization (i.e., the entire component along INLINEFORM6 is not subtracted but only a fraction of it). The above equation only ensures that INLINEFORM7 is soft-orthogonal to INLINEFORM8 . Alternately, we could pass the sequence of context vectors, INLINEFORM9 generated so far through a GRU cell. The state of this GRU cell at each time step would thus be aware of the history of the field vectors till that timestep. Now instead of orthogonalizing INLINEFORM10 to INLINEFORM11 we could orthogonalize INLINEFORM12 to the hidden state of this GRU at time-step INLINEFORM13 . In practice, we found this to work better as it accounts for all the field vectors in the history instead of only the previous field vector.",
"In summary, Equation provides a mechanism for remembering the current field vector when appropriate (thus capturing stay-on behavior) using a remember gate. On the other hand, Equation EQREF10 explicitly ensures that the field vector is very different (soft-orthogonal) from the previous field vectors once it is time to move on (thus capturing never look back behavior). The value of INLINEFORM0 computed in Equation EQREF10 is then used in Equation . The INLINEFORM1 (macro) thus obtained is then concatenated with INLINEFORM2 (micro) and fed to the decoder (see Fig. FIGREF3 )"
],
[
"We now describe our experimental setup:"
],
[
"We use the WikiBio dataset introduced by lebret2016neural. It consists of INLINEFORM0 biography articles from English Wikipedia. A biography article corresponds to a person (sportsman, politician, historical figure, actor, etc.). Each Wikipedia article has an accompanying infobox which serves as the structured input and the task is to generate the first sentence of the article (which typically is a one-line description of the person). We used the same train, valid and test sets which were made publicly available by lebret2016neural.",
"We also introduce two new biography datasets, one in French and one in German. These datasets were created and pre-processed using the same procedure as outlined in lebret2016neural. Specifically, we extracted the infoboxes and the first sentence from the corresponding Wikipedia article. As with the English dataset, we split the French and German datasets randomly into train (80%), test (10%) and valid (10%). The French and German datasets extracted by us has been made publicly available. The number of examples was 170K and 50K and the vocabulary size was 297K and 143K for French and German respectively. Although in this work we focus only on generating descriptions in one language, we hope that this dataset will also be useful for developing models which jointly learn to generate descriptions from structured data in multiple languages."
],
[
"We compare with the following models:",
"1. BIBREF0 : This is a conditional language model which uses a feed-forward neural network to predict the next word in the description conditioned on local characteristics (i.e., words within a field) and global characteristics (i.e., overall structure of the infobox).",
"2. BIBREF9 : This model was proposed in the context of the WeatherGov and RoboCup datasets which have a much smaller vocabulary. They use an improved attention model with additional regularizer terms which influence the weights assigned to the fields.",
"3. Basic Seq2Seq: This is the vanilla encode-attend-decode model BIBREF1 . Further, to deal with the large vocabulary ( INLINEFORM0 400K words) we use a copying mechanism as a post-processing step. Specifically, we identify the time steps at which the decoder produces unknown words (denoted by the special symbol UNK). For each such time step, we look at the attention weights on the input words and replace the UNK word by that input word which has received maximum attention at this timestep. This process is similar to the one described in BIBREF30 . Even lebret2016neural have a copying mechanism tightly integrated with their model."
],
[
"We tuned the hyperparameters of all the models using a validation set. As mentioned earlier, we used a bidirectional GRU cell as the function INLINEFORM0 for computing the representation of the fields and the values (see Section SECREF4 ). For all the models, we experimented with GRU state sizes of 128, 256 and 512. The total number of unique words in the corpus is around 400K (this includes the words in the infobox and the descriptions). Of these, we retained only the top 20K words in our vocabulary (same as BIBREF0 ). We initialized the embeddings of these words with 300 dimensional Glove embeddings BIBREF31 . We used Adam BIBREF32 with a learning rate of INLINEFORM1 , INLINEFORM2 and INLINEFORM3 . We trained the model for a maximum of 20 epochs and used early stopping with the patience set to 5 epochs."
],
[
"We now discuss the results of our experiments."
],
[
"Following lebret2016neural, we used BLEU-4, NIST-4 and ROUGE-4 as the evaluation metrics. We first make a few observations based on the results on the English dataset (Table TABREF15 ). The basic seq2seq model, as well as the model proposed by weather16, perform better than the model proposed by lebret2016neural. Our final model with bifocal attention and gated orthogonalization gives the best performance and does 10% (relative) better than the closest baseline (basic seq2seq) and 21% (relative) better than the current state of the art method BIBREF0 . In Table TABREF16 , we show some qualitative examples of the output generated by different models."
],
[
"To make a qualitative assessment of the generated sentences, we conducted a human study on a sample of 500 Infoboxes which were sampled from English dataset. The annotators for this task were undergraduate and graduate students. For each of these infoboxes, we generated summaries using the basic seq2seq model and our final model with bifocal attention and gated orthogonalization. For each description and for each model, we asked three annotators to rank the output of the systems based on i) adequacy (i.e. does it capture relevant information from the infobox), (ii) fluency (i.e. grammar) and (iii) relative preference (i.e., which of the two outputs would be preferred). Overall the average fluency/adequacy (on a scale of 5) for basic seq2seq model was INLINEFORM0 and INLINEFORM1 for our model respectively.",
"The results from Table TABREF17 suggest that in general gated orthogonalization model performs better than the basic seq2seq model. Additionally, annotators were asked to verify if the generated summaries look natural (i.e, as if they were generated by humans). In 423 out of 500 cases, the annotators said “Yes” suggesting that gated orthogonalization model indeed produces good descriptions."
],
[
"The results on the French and German datasets are summarized in Tables TABREF20 and TABREF20 respectively. Note that the code of BIBREF0 is not publicly available, hence we could not report numbers for French and German using their model. We observe that our final model gives the best performance - though the bifocal attention model performs poorly as compared to the basic seq2seq model on French. However, the overall performance for French and German are much smaller than those for English. There could be multiple reasons for this. First, the amount of training data in these two languages is smaller than that in English. Specifically, the amount of training data available in French (German) is only INLINEFORM0 ( INLINEFORM1 )% of that available for English. Second, on average the descriptions in French and German are longer than that in English (EN: INLINEFORM2 words, FR: INLINEFORM3 words and DE: INLINEFORM4 words). Finally, a manual inspection across the three languages suggests that the English descriptions have a more consistent structure than the French descriptions. For example, most English descriptions start with name followed by date of birth but this is not the case in French. However, this is only a qualitative observation and it is hard to quantify this characteristic of the French and German datasets."
],
[
"If the proposed model indeed works well then we should see attention weights that are consistent with the stay on and never look back behavior. To verify this, we plotted the attention weights in cases where the model with gated orthogonalization does better than the model with only bifocal attention. Figure FIGREF21 shows the attention weights corresponding to infobox in Figure FIGREF25 . Notice that the model without gated orthogonalization has attention on both name field and article title while rendering the name. The model with gated orthogonalization, on the other hand, stays on the name field for as long as it is required but then moves and never returns to it (as expected).",
"Due to lack of space, we do not show similar plots for French and German but we would like to mention that, in general, the differences between the attention weights learned by the model with and without gated orthogonalization were more pronounced for the French/German dataset than the English dataset. This is in agreement with the results reported in Table TABREF20 and TABREF20 where the improvements given by gated orthogonalization are more for French/German than for English."
],
[
"What if the model sees a different INLINEFORM0 of person at test time? For example, what if the training data does not contain any sportspersons but at test time we encounter the infobox of a sportsperson. This is the same as seeing out-of-domain data at test time. Such a situation is quite expected in the products domain where new products with new features (fields) get frequently added to the catalog. We were interested in three questions here. First, we wanted to see if testing the model on out-of-domain data indeed leads to a drop in the performance. For this, we compared the performance of our best model in two scenarios (i) trained on data from all domains (including the target domain) and tested on the target domain (sports, arts) and (ii) trained on data from all domains except the target domain and tested on the target domain. Comparing rows 1 and 2 of Table TABREF32 we observed a significant drop in the performance. Note that the numbers for sports domain in row 1 are much better than the Arts domain because roughly 40% of the WikiBio training data contains sportspersons.",
"Next, we wanted to see if we can use a small amount of data from the target domain to fine tune a model trained on the out of domain data. We observe that even with very small amounts of target domain data the performance starts improving significantly (see rows 3 and 4 of Table TABREF32 ). Note that if we train a model from scratch with only limited data from the target domain instead of fine-tuning a model trained on a different source domain then the performance is very poor. In particular, training a model from scratch with 10K training instances we get a BLEU score of INLINEFORM0 and INLINEFORM1 for arts and sports respectively. Finally, even though the actual words used for describing a sportsperson (footballer, cricketer, etc.) would be very different from the words used to describe an artist (actor, musician, etc.) they might share many fields (for example, date of birth, occupation, etc.). As seen in Figure FIGREF28 (attention weights corresponding to the infobox in Figure FIGREF27 ), the model predicts the attention weights correctly for common fields (such as occupation) but it is unable to use the right vocabulary to describe the occupation (since it has not seen such words frequently in the training data). However, once we fine tune the model with limited data from the target domain we see that it picks up the new vocabulary and produces a correct description of the occupation."
],
[
"We present a model for generating natural language descriptions from structured data. To address specific characteristics of the problem we propose neural components for fused bifocal attention and gated orthogonalization to address stay on and never look back behavior while decoding. Our final model outperforms an existing state of the art model on a large scale WikiBio dataset by 21%. We also introduce datasets for French and German and demonstrate that our model gives state of the art results on these datasets. Finally, we perform experiments with an out-of-domain model and show that if such a model is fine-tuned with small amounts of in domain data then it can give an improved performance on the target domain.",
"Given the multilingual nature of the new datasets, as future work, we would like to build models which can jointly learn to generate natural language descriptions from structured data in multiple languages. One idea is to replace the concepts in the input infobox by Wikidata concept ids which are language agnostic. A large amount of input vocabulary could thus be shared across languages thereby facilitating joint learning."
],
[
"We thank Google for supporting Preksha Nema through their Google India Ph.D. Fellowship program. We also thank Microsoft Research India for supporting Shreyas Shetty through their generous travel grant for attending the conference."
]
],
"section_name": [
"Introduction",
"Related work",
"Proposed model",
"Fused Bifocal Attention Mechanism",
"Gated Orthogonalization for Modeling Stay-On and Never Look Back behaviour",
"Experimental setup",
"Datasets",
"Models compared",
"Hyperparameter tuning",
"Results and Discussions",
"Comparison of different models",
"Human Evaluations",
"Performance on different languages",
"Visualizing Attention Weights",
"Out of domain results",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"85724d06688ad24b71bb65de3f15d0d2d347f0b5",
"c6c379ae7894bc9174af078042570f990502398d"
],
"answer": [
{
"evidence": [
"Following lebret2016neural, we used BLEU-4, NIST-4 and ROUGE-4 as the evaluation metrics. We first make a few observations based on the results on the English dataset (Table TABREF15 ). The basic seq2seq model, as well as the model proposed by weather16, perform better than the model proposed by lebret2016neural. Our final model with bifocal attention and gated orthogonalization gives the best performance and does 10% (relative) better than the closest baseline (basic seq2seq) and 21% (relative) better than the current state of the art method BIBREF0 . In Table TABREF16 , we show some qualitative examples of the output generated by different models."
],
"extractive_spans": [
"BLEU-4",
"NIST-4",
"ROUGE-4"
],
"free_form_answer": "",
"highlighted_evidence": [
"Following lebret2016neural, we used BLEU-4, NIST-4 and ROUGE-4 as the evaluation metrics."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Following lebret2016neural, we used BLEU-4, NIST-4 and ROUGE-4 as the evaluation metrics. We first make a few observations based on the results on the English dataset (Table TABREF15 ). The basic seq2seq model, as well as the model proposed by weather16, perform better than the model proposed by lebret2016neural. Our final model with bifocal attention and gated orthogonalization gives the best performance and does 10% (relative) better than the closest baseline (basic seq2seq) and 21% (relative) better than the current state of the art method BIBREF0 . In Table TABREF16 , we show some qualitative examples of the output generated by different models."
],
"extractive_spans": [
"BLEU-4",
"NIST-4",
"ROUGE-4"
],
"free_form_answer": "",
"highlighted_evidence": [
"Following lebret2016neural, we used BLEU-4, NIST-4 and ROUGE-4 as the evaluation metrics. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"ab24ad2572269f774e170acdb2f2bd38e657d0ed",
"ef8307607d79fbe37a5626563f3ad2f73196d4b1"
],
"answer": [
{
"evidence": [
"We tuned the hyperparameters of all the models using a validation set. As mentioned earlier, we used a bidirectional GRU cell as the function INLINEFORM0 for computing the representation of the fields and the values (see Section SECREF4 ). For all the models, we experimented with GRU state sizes of 128, 256 and 512. The total number of unique words in the corpus is around 400K (this includes the words in the infobox and the descriptions). Of these, we retained only the top 20K words in our vocabulary (same as BIBREF0 ). We initialized the embeddings of these words with 300 dimensional Glove embeddings BIBREF31 . We used Adam BIBREF32 with a learning rate of INLINEFORM1 , INLINEFORM2 and INLINEFORM3 . We trained the model for a maximum of 20 epochs and used early stopping with the patience set to 5 epochs."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We initialized the embeddings of these words with 300 dimensional Glove embeddings BIBREF31 . "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We tuned the hyperparameters of all the models using a validation set. As mentioned earlier, we used a bidirectional GRU cell as the function INLINEFORM0 for computing the representation of the fields and the values (see Section SECREF4 ). For all the models, we experimented with GRU state sizes of 128, 256 and 512. The total number of unique words in the corpus is around 400K (this includes the words in the infobox and the descriptions). Of these, we retained only the top 20K words in our vocabulary (same as BIBREF0 ). We initialized the embeddings of these words with 300 dimensional Glove embeddings BIBREF31 . We used Adam BIBREF32 with a learning rate of INLINEFORM1 , INLINEFORM2 and INLINEFORM3 . We trained the model for a maximum of 20 epochs and used early stopping with the patience set to 5 epochs."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We initialized the embeddings of these words with 300 dimensional Glove embeddings BIBREF31 ."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5389fddf562a43d9499fbad61f2eaa2584e05d6b",
"74ae0c4b9033b9b4201fde547850b05ca1570bce"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Comparison of different models on the English WIKIBIO dataset",
"FLOAT SELECTED: Table 4: Comparison of different models on the French WIKIBIO dataset",
"FLOAT SELECTED: Table 5: Comparison of different models on the German WIKIBIO dataset"
],
"extractive_spans": [],
"free_form_answer": "English WIKIBIO, French WIKIBIO , German WIKIBIO ",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Comparison of different models on the English WIKIBIO dataset",
"FLOAT SELECTED: Table 4: Comparison of different models on the French WIKIBIO dataset",
"FLOAT SELECTED: Table 5: Comparison of different models on the German WIKIBIO dataset"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the WikiBio dataset introduced by lebret2016neural. It consists of INLINEFORM0 biography articles from English Wikipedia. A biography article corresponds to a person (sportsman, politician, historical figure, actor, etc.). Each Wikipedia article has an accompanying infobox which serves as the structured input and the task is to generate the first sentence of the article (which typically is a one-line description of the person). We used the same train, valid and test sets which were made publicly available by lebret2016neural.",
"We also introduce two new biography datasets, one in French and one in German. These datasets were created and pre-processed using the same procedure as outlined in lebret2016neural. Specifically, we extracted the infoboxes and the first sentence from the corresponding Wikipedia article. As with the English dataset, we split the French and German datasets randomly into train (80%), test (10%) and valid (10%). The French and German datasets extracted by us has been made publicly available. The number of examples was 170K and 50K and the vocabulary size was 297K and 143K for French and German respectively. Although in this work we focus only on generating descriptions in one language, we hope that this dataset will also be useful for developing models which jointly learn to generate descriptions from structured data in multiple languages."
],
"extractive_spans": [
"WikiBio dataset",
" introduce two new biography datasets, one in French and one in German"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the WikiBio dataset introduced by lebret2016neural. It consists of INLINEFORM0 biography articles from English Wikipedia.",
"We also introduce two new biography datasets, one in French and one in German. These datasets were created and pre-processed using the same procedure as outlined in lebret2016neural. Specifically, we extracted the infoboxes and the first sentence from the corresponding Wikipedia article."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"60a2023767006c28173bfb17f445ed9ca30ea66d"
],
"answer": [
{
"evidence": [
"Note that the number of fields in the infobox and the ordering of the fields within the infobox varies from person to person. Given the large size (700K examples) and heterogeneous nature of the dataset which contains biographies of people from different backgrounds (sports, politics, arts, etc.), it is hard to come up with simple rule-based templates for generating natural language descriptions from infoboxes, thereby making a case for data-driven models. Based on the recent success of data-driven neural models for various other NLG tasks BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , one simple choice is to treat the infobox as a sequence of {field, value} pairs and use a standard seq2seq model for this task. However, such a model is too generic and does not exploit the specific characteristics of this task as explained below. First, note that while generating such descriptions from structured data, a human keeps track of information at two levels. Specifically, at a macro level, she would first decide which field to mention next and then at a micro level decide which of the values in the field needs to be mentioned next. For example, she first decides that at the current step, the field occupation needs attention and then decides which is the next appropriate occupation to attend to from the set of occupations (actor, director, producer, etc.). To enable this, we use a bifocal attention mechanism which computes an attention over fields at a macro level and over values at a micro level. We then fuse these attention weights such that the attention weight for a field also influences the attention over the values within it. Finally, we feed a fused context vector to the decoder which contains both field level and word level information. Note that such two-level attention mechanisms BIBREF6 , BIBREF7 , BIBREF8 have been used in the context of unstructured data (as opposed to structured data in our case), where at a macro level one needs to pay attention to sentences and at a micro level to words in the sentences.",
"Fused Bifocal Attention Mechanism",
"Intuitively, when a human writes a description from a table she keeps track of information at two levels. At the macro level, it is important to decide which is the appropriate field to attend to next and at a micro level (i.e., within a field) it is important to know which values to attend to next. To capture this behavior, we use a bifocal attention mechanism as described below.",
"Fused Attention: Intuitively, the attention weights assigned to a field should have an influence on all the values belonging to the particular field. To ensure this, we reweigh the micro level attention weights based on the corresponding macro level attention weights. In other words, we fuse the attention weights at the two levels as: DISPLAYFORM0",
"where INLINEFORM0 is the field corresponding to the INLINEFORM1 -th value, INLINEFORM2 is the macro level context vector."
],
"extractive_spans": [
"At the macro level, it is important to decide which is the appropriate field to attend to next",
"micro level (i.e., within a field) it is important to know which values to attend to next",
"fuse the attention weights at the two levels"
],
"free_form_answer": "",
"highlighted_evidence": [
"To enable this, we use a bifocal attention mechanism which computes an attention over fields at a macro level and over values at a micro level. We then fuse these attention weights such that the attention weight for a field also influences the attention over the values within it.",
"Fused Bifocal Attention Mechanism\nIntuitively, when a human writes a description from a table she keeps track of information at two levels. At the macro level, it is important to decide which is the appropriate field to attend to next and at a micro level (i.e., within a field) it is important to know which values to attend to next. To capture this behavior, we use a bifocal attention mechanism as described below.",
"Fused Attention: Intuitively, the attention weights assigned to a field should have an influence on all the values belonging to the particular field. To ensure this, we reweigh the micro level attention weights based on the corresponding macro level attention weights. In other words, we fuse the attention weights at the two levels as: DISPLAYFORM0\n\nwhere INLINEFORM0 is the field corresponding to the INLINEFORM1 -th value, INLINEFORM2 is the macro level context vector."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What metrics are used for evaluation?",
"Do they use pretrained embeddings?",
"What dataset is used?",
"What is a bifocal attention mechanism?"
],
"question_id": [
"c2e475adeddcdc4d637ef0d4f5065b6a9b299827",
"cb6a8c642575d3577d1840ca2f4cd2cc2c3397c5",
"6cd25c637c6b772ce29e8ee81571e8694549c5ab",
"1088255980541382a2aa2c0319427702172bbf84"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Sample Infobox with description : V. Balakrishnan (born 1943 as Venkataraman Balakrishnan) is an Indian theoretical physicist who has worked in a number of fields of areas, including particle physics, many-body theory, the mechanical behavior of solids, dynamical systems, stochastic processes, and quantum dynamics.",
"Figure 2: Proposed model",
"Table 1: Comparison of different models on the English WIKIBIO dataset",
"Table 3: Qualitative Comparison of Model A (Seq2Seq) and Model B (our model)",
"Table 4: Comparison of different models on the French WIKIBIO dataset",
"Table 5: Comparison of different models on the German WIKIBIO dataset",
"Table 2: Examples of generated descriptions from different models. For the last two examples, name generated by Basic Seq2Seq model is incorrect because it attended to preceded by field.",
"Figure 3: Comparison of the attention weights and descriptions produced for Infobox in Figure 4",
"Table 6: Out of domain results(BLEU-4)",
"Figure 4: Wikipedia Infobox for Samuel Smiles",
"Figure 5: Wikipedia Infobox for Mark Tobey",
"Figure 6: Comparison of the attention weights and descriptions (see highlighted boxes) produced by an out-of-domain model with and without fine tuning for the Infobox in Figure 5"
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"6-Table1-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"7-Table2-1.png",
"8-Figure3-1.png",
"8-Table6-1.png",
"8-Figure4-1.png",
"8-Figure5-1.png",
"9-Figure6-1.png"
]
} | [
"What dataset is used?"
] | [
[
"1804.07789-Datasets-1",
"1804.07789-6-Table1-1.png",
"1804.07789-7-Table5-1.png",
"1804.07789-Datasets-0",
"1804.07789-7-Table4-1.png"
]
] | [
"English WIKIBIO, French WIKIBIO , German WIKIBIO "
] | 168 |
1905.11268 | Combating Adversarial Misspellings with Robust Word Recognition | To combat adversarial spelling mistakes, we propose placing a word recognition model in front of the downstream classifier. Our word recognition models build upon the RNN semicharacter architecture, introducing several new backoff strategies for handling rare and unseen words. Trained to recognize words corrupted by random adds, drops, swaps, and keyboard mistakes, our method achieves 32% relative (and 3.3% absolute) error reduction over the vanilla semi-character model. Notably, our pipeline confers robustness on the downstream classifier, outperforming both adversarial training and off-the-shelf spell checkers. Against a BERT model fine-tuned for sentiment analysis, a single adversarially-chosen character attack lowers accuracy from 90.3% to 45.8%. Our defense restores accuracy to 75%. Surprisingly, better word recognition does not always entail greater robustness. Our analysis reveals that robustness also depends upon a quantity that we denote the sensitivity. | {
"paragraphs": [
[
"Despite the rapid progress of deep learning techniques on diverse supervised learning tasks, these models remain brittle to subtle shifts in the data distribution. Even when the permissible changes are confined to barely-perceptible perturbations, training robust models remains an open challenge. Following the discovery that imperceptible attacks could cause image recognition models to misclassify examples BIBREF0 , a veritable sub-field has emerged in which authors iteratively propose attacks and countermeasures.",
"For all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails' intended meaning BIBREF1 , BIBREF2 . As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously BIBREF3 .",
"In this paper, we focus on adversarially-chosen spelling mistakes in the context of text classification, addressing the following attack types: dropping, adding, and swapping internal characters within words. These perturbations are inspired by psycholinguistic studies BIBREF4 , BIBREF5 which demonstrated that humans can comprehend text altered by jumbling internal characters, provided that the first and last characters of each word remain unperturbed.",
"First, in experiments addressing both BiLSTM and fine-tuned BERT models, comprising four different input formats: word-only, char-only, word+char, and word-piece BIBREF6 , we demonstrate that an adversary can degrade a classifier's performance to that achieved by random guessing. This requires altering just two characters per sentence. Such modifications might flip words either to a different word in the vocabulary or, more often, to the out-of-vocabulary token UNK. Consequently, adversarial edits can degrade a word-level model by transforming the informative words to UNK. Intuitively, one might suspect that word-piece and character-level models would be less susceptible to spelling attacks as they can make use of the residual word context. However, our experiments demonstrate that character and word-piece models are in fact more vulnerable. We show that this is due to the adversary's effective capacity for finer grained manipulations on these models. While against a word-level model, the adversary is mostly limited to UNK-ing words, against a word-piece or character-level model, each character-level add, drop, or swap produces a distinct input, providing the adversary with a greater set of options.",
"Second, we evaluate first-line techniques including data augmentation and adversarial training, demonstrating that they offer only marginal benefits here, e.g., a BERT model achieving $90.3$ accuracy on a sentiment classification task, is degraded to $64.1$ by an adversarially-chosen 1-character swap in the sentence, which can only be restored to $69.2$ by adversarial training.",
"Third (our primary contribution), we propose a task-agnostic defense, attaching a word recognition model that predicts each word in a sentence given a full sequence of (possibly misspelled) inputs. The word recognition model's outputs form the input to a downstream classification model. Our word recognition models build upon the RNN-based semi-character word recognition model due to BIBREF7 . While our word recognizers are trained on domain-specific text from the task at hand, they often predict UNK at test time, owing to the small domain-specific vocabulary. To handle unobserved and rare words, we propose several backoff strategies including falling back on a generic word recognizer trained on a larger corpus. Incorporating our defenses, BERT models subject to 1-character attacks are restored to $88.3$ , $81.1$ , $78.0$ accuracy for swap, drop, add attacks respectively, as compared to $69.2$ , $63.6$ , and $50.0$ for adversarial training",
"Fourth, we offer a detailed qualitative analysis, demonstrating that a low word error rate alone is insufficient for a word recognizer to confer robustness on the downstream task. Additionally, we find that it is important that the recognition model supply few degrees of freedom to an attacker. We provide a metric to quantify this notion of sensitivity in word recognition models and study its relation to robustness empirically. Models with low sensitivity and word error rate are most robust."
],
[
"Several papers address adversarial attacks on NLP systems. Changes to text, whether word- or character-level, are all perceptible, raising some questions about what should rightly be considered an adversarial example BIBREF8 , BIBREF9 . BIBREF10 address the reading comprehension task, showing that by appending distractor sentences to the end of stories from the SQuAD dataset BIBREF11 , they could cause models to output incorrect answers. Inspired by this work, BIBREF12 demonstrate an attack that breaks entailment systems by replacing a single word with either a synonym or its hypernym. Recently, BIBREF13 investigated the problem of producing natural-seeming adversarial examples, noting that adversarial examples in NLP are often ungrammatical BIBREF14 .",
"In related work on character-level attacks, BIBREF8 , BIBREF15 explored gradient-based methods to generate string edits to fool classification and translation systems, respectively. While their focus is on efficient methods for generating adversaries, ours is on improving the worst case adversarial performance. Similarly, BIBREF9 studied how synthetic and natural noise affects character-level machine translation. They considered structure invariant representations and adversarial training as defenses against such noise. Here, we show that an auxiliary word recognition model, which can be trained on unlabeled data, provides a strong defense.",
"Spelling correction BIBREF16 is often viewed as a sub-task of grammatical error correction BIBREF17 , BIBREF18 . Classic methods rely on a source language model and a noisy channel model to find the most likely correction for a given word BIBREF19 , BIBREF20 . Recently, neural techniques have been applied to the task BIBREF7 , BIBREF21 , which model the context and orthography of the input together. Our work extends the ScRNN model of BIBREF7 ."
],
[
"To tackle character-level adversarial attacks, we introduce a simple two-stage solution, placing a word recognition model ( $W$ ) before the downstream classifier ( $C$ ). Under this scheme, all inputs are classified by the composed model $C \\circ W$ . This modular approach, with $W$ and $C$ trained separately, offers several benefits: (i) we can deploy the same word recognition model for multiple downstream classification tasks/models; and (ii) we can train the word recognition model with larger unlabeled corpora.",
"Against adversarial mistakes, two important factors govern the robustness of this combined model: $W$ 's accuracy in recognizing misspelled words and $W$ 's sensitivity to adversarial perturbations on the same input. We discuss these aspects in detail below."
],
[
"We now describe semi-character RNNs for word recognition, explain their limitations, and suggest techniques to improve them.",
"Inspired by the psycholinguistic studies BIBREF5 , BIBREF4 , BIBREF7 proposed a semi-character based RNN (ScRNN) that processes a sentence of words with misspelled characters, predicting the correct words at each step. Let $s = \\lbrace w_1, w_2, \\dots , w_n\\rbrace $ denote the input sentence, a sequence of constituent words $w_i$ . Each input word ( $w_i$ ) is represented by concatenating (i) a one hot vector of the first character ( $\\mathbf {w_{i1}}$ ); (ii) a one hot representation of the last character ( $\\mathbf {w_{il}}$ , where $l$ is the length of word $w_i$ ); and (iii) a bag of characters representation of the internal characters ( $\\sum _{j=2}^{l-1}\\mathbf {w_{ij}})$ . ScRNN treats the first and the last characters individually, and is agnostic to the ordering of the internal characters. Each word, represented accordingly, is then fed into a BiLSTM cell. At each sequence step, the training target is the correct corresponding word (output dimension equal to vocabulary size), and the model is optimized with cross-entropy loss.",
"While BIBREF7 demonstrate strong word recognition performance, a drawback of their evaluation setup is that they only attack and evaluate on the subset of words that are a part of their training vocabulary. In such a setting, the word recognition performance is unreasonably dependent on the chosen vocabulary size. In principle, one can design models to predict (correctly) only a few chosen words, and ignore the remaining majority and still reach 100% accuracy. For the adversarial setting, rare and unseen words in the wild are particularly critical, as they provide opportunities for the attackers. A reliable word-recognizer should handle these cases gracefully. Below, we explore different ways to back off when the ScRNN predicts UNK (a frequent outcome for rare and unseen words):",
"Pass-through: word-recognizer passes on the (possibly misspelled) word as is.",
"Backoff to neutral word: Alternatively, noting that passing $\\colorbox {gray!20}{\\texttt {UNK}}$ -predicted words through unchanged exposes the downstream model to potentially corrupted text, we consider backing off to a neutral word like `a', which has a similar distribution across classes.",
"Backoff to background model: We also consider falling back upon a more generic word recognition model trained upon a larger, less-specialized corpus whenever the foreground word recognition model predicts UNK. Figure 1 depicts this scenario pictorially.",
"Empirically, we find that the background model (by itself) is less accurate, because of the large number of words it is trained to predict. Thus, it is best to train a precise foreground model on an in-domain corpus and focus on frequent words, and then to resort to a general-purpose background model for rare and unobserved words. Next, we delineate our second consideration for building robust word-recognizers."
],
[
"In computer vision, an important factor determining the success of an adversary is the norm constraint on the perturbations allowed to an image ( $|| \\bf x - \\bf x^{\\prime }||_{\\infty } < \\epsilon $ ). Higher values of $\\epsilon $ lead to a higher chance of mis-classification for at least one $\\bf x^{\\prime }$ . Defense methods such as quantization BIBREF22 and thermometer encoding BIBREF23 try to reduce the space of perturbations available to the adversary by making the model invariant to small changes in the input.",
"In NLP, we often get such invariance for free, e.g., for a word-level model, most of the perturbations produced by our character-level adversary lead to an UNK at its input. If the model is robust to the presence of these UNK tokens, there is little room for an adversary to manipulate it. Character-level models, on the other hand, despite their superior performance in many tasks, do not enjoy such invariance. This characteristic invariance could be exploited by an attacker. Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity.",
"We can quantify this notion for a word recognition system $W$ as the expected number of unique outputs it assigns to a set of adversarial perturbations. Given a sentence $s$ from the set of sentences $\\mathcal {S}$ , let $A(s) = {s_1}^{\\prime } , {s_2}^{\\prime }, \\dots , {s_n}^{\\prime }$ denote the set of $n$ perturbations to it under attack type $A$ , and let $V$ be the function that maps strings to an input representation for the downstream classifier. For a word level model, $V$ would transform sentences to a sequence of word ids, mapping OOV words to the same UNK ID. Whereas, for a char (or word+char, word-piece) model, $V$ would map inputs to a sequence of character IDs. Formally, sensitivity is defined as ",
"$$S_{W,V}^A=\\mathbb {E}_{s}\\left[\\frac{\\#_{u}(V \\circ W({s_1}^{\\prime }), \\dots , V \\circ W({s_n}^{\\prime }))}{n}\\right] ,$$ (Eq. 12) ",
"where $V \\circ W (s_i)$ returns the input representation (of the downstream classifier) for the output string produced by the word-recognizer $W$ using $s_i$ and $\\#_{u}(\\cdot )$ counts the number of unique arguments.",
"Intuitively, we expect a high value of $S_{W, V}^A$ to lead to a lower robustness of the downstream classifier, since the adversary has more degrees of freedom to attack the classifier. Thus, when using word recognition as a defense, it is prudent to design a low sensitivity system with a low error rate. However, as we will demonstrate, there is often a trade-off between sensitivity and error rate."
],
[
"Suppose we are given a classifier $C: \\mathcal {S} \\rightarrow \\mathcal {Y}$ which maps natural language sentences $s \\in \\mathcal {S}$ to a label from a predefined set $y \\in \\mathcal {Y}$ . An adversary for this classifier is a function $A$ which maps a sentence $s$ to its perturbed versions $\\lbrace s^{\\prime }_1, s^{\\prime }_2, \\ldots , s^{\\prime }_{n}\\rbrace $ such that each $s^{\\prime }_i$ is close to $s$ under some notion of distance between sentences. We define the robustness of classifier $C$ to the adversary $A$ as: ",
"$$R_{C,A} = \\mathbb {E}_s \\left[\\min _{s^{\\prime } \\in A(s)} \\mathbb {1}[C(s^{\\prime }) = y]\\right],$$ (Eq. 14) ",
"where $y$ represents the ground truth label for $s$ . In practice, a real-world adversary may only be able to query the classifier a few times, hence $R_{C,A}$ represents the worst-case adversarial performance of $C$ . Methods for generating adversarial examples, such as HotFlip BIBREF8 , focus on efficient algorithms for searching the $\\min $ above. Improving $R_{C,A}$ would imply better robustness against all these methods.",
"We explore adversaries which perturb sentences with four types of character-level edits:",
"(1) Swap: swapping two adjacent internal characters of a word. (2) Drop: removing an internal character of a word. (3) Keyboard: substituting an internal character with adjacent characters of QWERTY keyboard (4) Add: inserting a new character internally in a word. In line with the psycholinguistic studies BIBREF5 , BIBREF4 , to ensure that the perturbations do not affect human ability to comprehend the sentence, we only allow the adversary to edit the internal characters of a word, and not edit stopwords or words shorter than 4 characters.",
"For 1-character attacks, we try all possible perturbations listed above until we find an adversary that flips the model prediction. For 2-character attacks, we greedily fix the edit which had the least confidence among 1-character attacks, and then try all the allowed perturbations on the remaining words. Higher order attacks can be performed in a similar manner. The greedy strategy reduces the computation required to obtain higher order attacks, but also means that the robustness score is an upper bound on the true robustness of the classifier."
],
[
"In this section, we first discuss our experiments on the word recognition systems."
],
[
"Data: We evaluate the spell correctors from § \"Robust Word Recognition\" on movie reviews from the Stanford Sentiment Treebank (SST) BIBREF24 . The SST dataset consists of 8544 movie reviews, with a vocabulary of over 16K words. As a background corpus, we use the IMDB movie reviews BIBREF25 , which contain 54K movie reviews, and a vocabulary of over 78K words. The two datasets do not share any reviews in common. The spell-correction models are evaluated on their ability to correct misspellings. The test setting consists of reviews where each word (with length $\\ge 4$ , barring stopwords) is attacked by one of the attack types (from swap, add, drop and keyboard attacks). In the all attack setting, we mix all attacks by randomly choosing one for each word. This most closely resembles a real world attack setting.",
"In addition to our word recognition models, we also compare to After The Deadline (ATD), an open-source spell corrector. We found ATD to be the best freely-available corrector. We refer the reader to BIBREF7 for comparisons of ScRNN to other anonymized commercial spell checkers.",
"For the ScRNN model, we use a single-layer Bi-LSTM with a hidden dimension size of 50. The input representation consists of 198 dimensions, which is thrice the number of unique characters (66) in the vocabulary. We cap the vocabulary size to 10K words, whereas we use the entire vocabulary of 78470 words when we backoff to the background model. For training these networks, we corrupt the movie reviews according to all attack types, i.e., applying one of the 4 attack types to each word, and trying to reconstruct the original words via cross entropy loss.",
"We calculate the word error rates (WER) of each of the models for different attacks and present our findings in Table 2 . Note that ATD incorrectly predicts $11.2$ words for every 100 words (in the `all' setting), whereas, all of the backoff variations of the ScRNN reconstruct better. The most accurate variant involves backing off to the background model, resulting in a low error rate of $6.9\\%$ , leading to the best performance on word recognition. This is a $32\\%$ relative error reduction compared to the vanilla ScRNN model with a pass-through backoff strategy. We can attribute the improved performance to the fact that there are $5.25\\%$ words in the test corpus that are unseen in the training corpus, and are thus only recoverable by backing off to a larger corpus. Notably, only training on the larger background corpus does worse, at $8.7\\%$ , since the distribution of word frequencies is different in the background corpus compared to the foreground corpus."
],
[
"We use sentiment analysis and paraphrase detection as downstream tasks, as for these two tasks, 1-2 character edits do not change the output labels.",
"For sentiment classification, we systematically study the effect of character-level adversarial attacks on two architectures and four different input formats. The first architecture encodes the input sentence into a sequence of embeddings, which are then sequentially processed by a BiLSTM. The first and last states of the BiLSTM are then used by the softmax layer to predict the sentiment of the input. We consider three input formats for this architecture: (1) Word-only: where the input words are encoded using a lookup table; (2) Char-only: where the input words are encoded using a separate single-layered BiLSTM over their characters; and (3) Word $+$ Char: where the input words are encoded using a concatenation of (1) and (2) .",
"The second architecture uses the fine-tuned BERT model BIBREF26 , with an input format of word-piece tokenization. This model has recently set a new state-of-the-art on several NLP benchmarks, including the sentiment analysis task we consider here. All models are trained and evaluated on the binary version of the sentence-level Stanford Sentiment Treebank BIBREF24 dataset with only positive and negative reviews.",
"We also consider the task of paraphrase detection. Here too, we make use of the fine-tuned BERT BIBREF26 , which is trained and evaluated on the Microsoft Research Paraphrase Corpus (MRPC) BIBREF27 .",
"Two common methods for dealing with adversarial examples include: (1) data augmentation (DA) BIBREF28 ; and (2) adversarial training (Adv) BIBREF29 . In DA, the trained model is fine-tuned after augmenting the training set with an equal number of examples randomly attacked with a 1-character edit. In Adv, the trained model is fine-tuned with additional adversarial examples (selected at random) that produce incorrect predictions from the current-state classifier. The process is repeated iteratively, generating and adding newer adversarial examples from the updated classifier model, until the adversarial accuracy on dev set stops improving.",
"In Table 3 , we examine the robustness of the sentiment models under each attack and defense method. In the absence of any attack or defense, BERT (a word-piece model) performs the best ( $90.3\\%$ ) followed by word+char models ( $80.5\\%$ ), word-only models ( $79.2\\%$ ) and then char-only models ( $70.3\\%$ ). However, even single-character attacks (chosen adversarially) can be catastrophic, resulting in a significantly degraded performance of $46\\%$ , $57\\%$ , $59\\%$ and $33\\%$ , respectively under the `all' setting.",
"Intuitively, one might suppose that word-piece and character-level models would be more robust to such attacks given they can make use of the remaining context. However, we find that they are the more susceptible. To see why, note that the word `beautiful' can only be altered in a few ways for word-only models, either leading to an UNK or an existing vocabulary word, whereas, word-piece and character-only models treat each unique character combination differently. This provides more variations that an attacker can exploit. Following similar reasoning, add and key attacks pose a greater threat than swap and drop attacks. The robustness of different models can be ordered as word-only $>$ word+char $>$ char-only $\\sim $ word-piece, and the efficacy of different attacks as add $>$ key $>$ drop $>$ swap.",
"Next, we scrutinize the effectiveness of defense methods when faced against adversarially chosen attacks. Clearly from table 3 , DA and Adv are not effective in this case. We observed that despite a low training error, these models were not able to generalize to attacks on newer words at test time. ATD spell corrector is the most effective on keyboard attacks, but performs poorly on other attack types, particularly the add attack strategy.",
"The ScRNN model with pass-through backoff offers better protection, bringing back the adversarial accuracy within $5\\%$ range for the swap attack. It is also effective under other attack classes, and can mitigate the adversarial effect in word-piece models by $21\\%$ , character-only models by $19\\%$ , and in word, and word+char models by over $4.5\\%$ . This suggests that the direct training signal of word error correction is more effective than the indirect signal of sentiment classification available to DA and Adv for model robustness.",
"We observe additional gains by using background models as a backoff alternative, because of its lower word error rate (WER), especially, under the swap and drop attacks. However, these gains do not consistently translate in all other settings, as lower WER is necessary but not sufficient. Besides lower error rate, we find that a solid defense should furnish the attacker the fewest options to attack, i.e. it should have a low sensitivity.",
"As we shall see in section § \"Understanding Model Sensitivity\" , the backoff neutral variation has the lowest sensitivity due to mapping UNK predictions to a fixed neutral word. Thus, it results in the highest robustness on most of the attack types for all four model classes.",
"Table 4 shows the accuracy of BERT on 200 examples from the dev set of the MRPC paraphrase detection task under various attack and defense settings. We re-trained the ScRNN model variants on the MRPC training set for these experiments. Again, we find that simple 1-2 character attacks can bring down the accuracy of BERT significantly ( $89\\%$ to $31\\%$ ). Word recognition models can provide an effective defense, with both our pass-through and neutral variants recovering most of the accuracy. While the neutral backoff model is effective on 2-char attacks, it hurts performance in the no attack setting, since it incorrectly modifies certain correctly spelled entity names. Since the two variants are already effective, we did not train a background model for this task."
],
[
"To study model sensitivity, for each sentence, we perturb one randomly-chosen word and replace it with all possible perturbations under a given attack type. The resulting set of perturbed sentences is then fed to the word recognizer (whose sensitivity is to be estimated). As described in equation 12 , we count the number of unique predictions from the output sentences. Two corrections are considered unique if they are mapped differently by the downstream classifier.",
"The neutral backoff variant has the lowest sensitivity (Table 5 ). This is expected, as it returns a fixed neutral word whenever the ScRNN predicts an UNK, therefore reducing the number of unique outputs it predicts. Open vocabulary (i.e. char-only, word+char, word-piece) downstream classifiers consider every unique combination of characters differently, whereas word-only classifiers internally treat all out of vocabulary (OOV) words alike. Hence, for char-only, word+char, and word-piece models, the pass-through version is more sensitive than the background variant, as it passes words as is (and each combination is considered uniquely). However, for word-only models, pass-through is less sensitive as all the OOV character combinations are rendered identical.",
"Ideally, a preferred defense is one with low sensitivity and word error rate. In practice, however, we see that a low error rate often comes at the cost of sensitivity. We see this trade-off in Figure 2 , where we plot WER and sensitivity on the two axes, and depict the robustness when using different backoff variants. Generally, sensitivity is the more dominant factor out of the two, as the error rates of the considered variants are reasonably low.",
"We verify if the sentiment (of the reviews) is preserved with char-level attacks. In a human study with 50 attacked (and subsequently misclassified), and 50 unchanged reviews, it was noted that 48 and 49, respectively, preserved the sentiment."
],
[
"As character and word-piece inputs become commonplace in modern NLP pipelines, it is worth highlighting the vulnerability they add. We show that minimally-doctored attacks can bring down accuracy of classifiers to random guessing. We recommend word recognition as a safeguard against this and build upon RNN-based semi-character word recognizers. We discover that when used as a defense mechanism, the most accurate word recognition models are not always the most robust against adversarial attacks. Additionally, we highlight the need to control the sensitivity of these models to achieve high robustness."
],
[
"The authors are grateful to Graham Neubig, Eduard Hovy, Paul Michel, Mansi Gupta, and Antonios Anastasopoulos for suggestions and feedback."
]
],
"section_name": [
"Introduction",
"Related Work",
"Robust Word Recognition",
"ScRNN with Backoff",
"Model Sensitivity",
"Synthesizing Adversarial Attacks",
"Experiments and Results",
"Word Error Correction",
"Robustness to adversarial attacks",
"Understanding Model Sensitivity",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"15ddd827360695f7846ceff0ee800b9d874d0538",
"435b631f0001210641a454d504d67e36ba058568",
"ff3f7fa487bdfae25185c1d3c5deb91bba515715"
],
"answer": [
{
"evidence": [
"In NLP, we often get such invariance for free, e.g., for a word-level model, most of the perturbations produced by our character-level adversary lead to an UNK at its input. If the model is robust to the presence of these UNK tokens, there is little room for an adversary to manipulate it. Character-level models, on the other hand, despite their superior performance in many tasks, do not enjoy such invariance. This characteristic invariance could be exploited by an attacker. Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity."
],
"extractive_spans": [
"the number of distinct word recognition outputs that an attacker can induce"
],
"free_form_answer": "",
"highlighted_evidence": [
"Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In NLP, we often get such invariance for free, e.g., for a word-level model, most of the perturbations produced by our character-level adversary lead to an UNK at its input. If the model is robust to the presence of these UNK tokens, there is little room for an adversary to manipulate it. Character-level models, on the other hand, despite their superior performance in many tasks, do not enjoy such invariance. This characteristic invariance could be exploited by an attacker. Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity.",
"We can quantify this notion for a word recognition system $W$ as the expected number of unique outputs it assigns to a set of adversarial perturbations. Given a sentence $s$ from the set of sentences $\\mathcal {S}$ , let $A(s) = {s_1}^{\\prime } , {s_2}^{\\prime }, \\dots , {s_n}^{\\prime }$ denote the set of $n$ perturbations to it under attack type $A$ , and let $V$ be the function that maps strings to an input representation for the downstream classifier. For a word level model, $V$ would transform sentences to a sequence of word ids, mapping OOV words to the same UNK ID. Whereas, for a char (or word+char, word-piece) model, $V$ would map inputs to a sequence of character IDs. Formally, sensitivity is defined as"
],
"extractive_spans": [],
"free_form_answer": "The expected number of unique outputs a word recognition system assigns to a set of adversarial perturbations ",
"highlighted_evidence": [
"We denote this property of a model as its sensitivity.\n\nWe can quantify this notion for a word recognition system $W$ as the expected number of unique outputs it assigns to a set of adversarial perturbations."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In NLP, we often get such invariance for free, e.g., for a word-level model, most of the perturbations produced by our character-level adversary lead to an UNK at its input. If the model is robust to the presence of these UNK tokens, there is little room for an adversary to manipulate it. Character-level models, on the other hand, despite their superior performance in many tasks, do not enjoy such invariance. This characteristic invariance could be exploited by an attacker. Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity."
],
"extractive_spans": [
"the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”"
],
"free_form_answer": "",
"highlighted_evidence": [
"Thus, to limit the number of different inputs to the classifier, we wish to reduce the number of distinct word recognition outputs that an attacker can induce, not just the number of words on which the model is “fooled”. We denote this property of a model as its sensitivity."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"c7d4a630661cd719ea504dba56393f78278b296b",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"051e472956de9c3f0fc4ee775e95c5d66bda0d05"
],
"answer": [
{
"evidence": [
"For sentiment classification, we systematically study the effect of character-level adversarial attacks on two architectures and four different input formats. The first architecture encodes the input sentence into a sequence of embeddings, which are then sequentially processed by a BiLSTM. The first and last states of the BiLSTM are then used by the softmax layer to predict the sentiment of the input. We consider three input formats for this architecture: (1) Word-only: where the input words are encoded using a lookup table; (2) Char-only: where the input words are encoded using a separate single-layered BiLSTM over their characters; and (3) Word $+$ Char: where the input words are encoded using a concatenation of (1) and (2) .",
"We also consider the task of paraphrase detection. Here too, we make use of the fine-tuned BERT BIBREF26 , which is trained and evaluated on the Microsoft Research Paraphrase Corpus (MRPC) BIBREF27 ."
],
"extractive_spans": [],
"free_form_answer": "Sentiment analysis and paraphrase detection under adversarial attacks",
"highlighted_evidence": [
"For sentiment classification, we systematically study the effect of character-level adversarial attacks on two architectures and four different input formats. ",
"We also consider the task of paraphrase detection."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"651d1a88bfbe9b4ae67f475766bf0695324c5c35",
"9a812c2385822cf7d1d38a66a2271d932de79cb8"
],
"answer": [
{
"evidence": [
"Inspired by the psycholinguistic studies BIBREF5 , BIBREF4 , BIBREF7 proposed a semi-character based RNN (ScRNN) that processes a sentence of words with misspelled characters, predicting the correct words at each step. Let $s = \\lbrace w_1, w_2, \\dots , w_n\\rbrace $ denote the input sentence, a sequence of constituent words $w_i$ . Each input word ( $w_i$ ) is represented by concatenating (i) a one hot vector of the first character ( $\\mathbf {w_{i1}}$ ); (ii) a one hot representation of the last character ( $\\mathbf {w_{il}}$ , where $l$ is the length of word $w_i$ ); and (iii) a bag of characters representation of the internal characters ( $\\sum _{j=2}^{l-1}\\mathbf {w_{ij}})$ . ScRNN treats the first and the last characters individually, and is agnostic to the ordering of the internal characters. Each word, represented accordingly, is then fed into a BiLSTM cell. At each sequence step, the training target is the correct corresponding word (output dimension equal to vocabulary size), and the model is optimized with cross-entropy loss."
],
"extractive_spans": [],
"free_form_answer": "A semi-character based RNN (ScRNN) treats the first and last characters individually, and is agnostic to the ordering of the internal characters",
"highlighted_evidence": [
"Inspired by the psycholinguistic studies BIBREF5 , BIBREF4 , BIBREF7 proposed a semi-character based RNN (ScRNN) that processes a sentence of words with misspelled characters, predicting the correct words at each step. Let $s = \\lbrace w_1, w_2, \\dots , w_n\\rbrace $ denote the input sentence, a sequence of constituent words $w_i$ . Each input word ( $w_i$ ) is represented by concatenating (i) a one hot vector of the first character ( $\\mathbf {w_{i1}}$ ); (ii) a one hot representation of the last character ( $\\mathbf {w_{il}}$ , where $l$ is the length of word $w_i$ ); and (iii) a bag of characters representation of the internal characters ( $\\sum _{j=2}^{l-1}\\mathbf {w_{ij}})$ . ScRNN treats the first and the last characters individually, and is agnostic to the ordering of the internal characters. Each word, represented accordingly, is then fed into a BiLSTM cell. At each sequence step, the training target is the correct corresponding word (output dimension equal to vocabulary size), and the model is optimized with cross-entropy loss."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Third (our primary contribution), we propose a task-agnostic defense, attaching a word recognition model that predicts each word in a sentence given a full sequence of (possibly misspelled) inputs. The word recognition model's outputs form the input to a downstream classification model. Our word recognition models build upon the RNN-based semi-character word recognition model due to BIBREF7 . While our word recognizers are trained on domain-specific text from the task at hand, they often predict UNK at test time, owing to the small domain-specific vocabulary. To handle unobserved and rare words, we propose several backoff strategies including falling back on a generic word recognizer trained on a larger corpus. Incorporating our defenses, BERT models subject to 1-character attacks are restored to $88.3$ , $81.1$ , $78.0$ accuracy for swap, drop, add attacks respectively, as compared to $69.2$ , $63.6$ , and $50.0$ for adversarial training",
"Inspired by the psycholinguistic studies BIBREF5 , BIBREF4 , BIBREF7 proposed a semi-character based RNN (ScRNN) that processes a sentence of words with misspelled characters, predicting the correct words at each step. Let $s = \\lbrace w_1, w_2, \\dots , w_n\\rbrace $ denote the input sentence, a sequence of constituent words $w_i$ . Each input word ( $w_i$ ) is represented by concatenating (i) a one hot vector of the first character ( $\\mathbf {w_{i1}}$ ); (ii) a one hot representation of the last character ( $\\mathbf {w_{il}}$ , where $l$ is the length of word $w_i$ ); and (iii) a bag of characters representation of the internal characters ( $\\sum _{j=2}^{l-1}\\mathbf {w_{ij}})$ . ScRNN treats the first and the last characters individually, and is agnostic to the ordering of the internal characters. Each word, represented accordingly, is then fed into a BiLSTM cell. At each sequence step, the training target is the correct corresponding word (output dimension equal to vocabulary size), and the model is optimized with cross-entropy loss."
],
"extractive_spans": [
"processes a sentence of words with misspelled characters, predicting the correct words at each step"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our word recognition models build upon the RNN-based semi-character word recognition model due to BIBREF7 .",
"Inspired by the psycholinguistic studies BIBREF5 , BIBREF4 , BIBREF7 proposed a semi-character based RNN (ScRNN) that processes a sentence of words with misspelled characters, predicting the correct words at each step."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"d04940d8a2d81a44b32ff6bd0872d138dedcd77c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"76dc08d74d886e2608305de45913077f16620a08"
],
"answer": [
{
"evidence": [
"For all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails' intended meaning BIBREF1 , BIBREF2 . As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously BIBREF3 ."
],
"extractive_spans": [],
"free_form_answer": "Adversarial misspellings are a real-world problem",
"highlighted_evidence": [
"However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails' intended meaning BIBREF1 , BIBREF2 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"e9f12ab4ced667d482b8b8cb4a3a9e98f2c42b25"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"6b8bfc71d0cc8b871f78fbbfa807d626ddb10b33",
"80104d1334e2651bd32ad3e9a568b8c459359027",
"cb380adfc6c3f628bd361108352fe8fe82fb0894"
],
"answer": [
{
"evidence": [
"While BIBREF7 demonstrate strong word recognition performance, a drawback of their evaluation setup is that they only attack and evaluate on the subset of words that are a part of their training vocabulary. In such a setting, the word recognition performance is unreasonably dependent on the chosen vocabulary size. In principle, one can design models to predict (correctly) only a few chosen words, and ignore the remaining majority and still reach 100% accuracy. For the adversarial setting, rare and unseen words in the wild are particularly critical, as they provide opportunities for the attackers. A reliable word-recognizer should handle these cases gracefully. Below, we explore different ways to back off when the ScRNN predicts UNK (a frequent outcome for rare and unseen words):",
"Pass-through: word-recognizer passes on the (possibly misspelled) word as is.",
"Backoff to neutral word: Alternatively, noting that passing $\\colorbox {gray!20}{\\texttt {UNK}}$ -predicted words through unchanged exposes the downstream model to potentially corrupted text, we consider backing off to a neutral word like `a', which has a similar distribution across classes.",
"Backoff to background model: We also consider falling back upon a more generic word recognition model trained upon a larger, less-specialized corpus whenever the foreground word recognition model predicts UNK. Figure 1 depicts this scenario pictorially."
],
"extractive_spans": [],
"free_form_answer": "In pass-through, the recognizer passes on the possibly misspelled word, backoff to neutral word backs off to a word with similar distribution across classes and backoff to background model backs off to a more generic word recognition model trained with larger and less specialized corpus.",
"highlighted_evidence": [
"Below, we explore different ways to back off when the ScRNN predicts UNK (a frequent outcome for rare and unseen words):\n\nPass-through: word-recognizer passes on the (possibly misspelled) word as is.\n\nBackoff to neutral word: Alternatively, noting that passing $\\colorbox {gray!20}{\\texttt {UNK}}$ -predicted words through unchanged exposes the downstream model to potentially corrupted text, we consider backing off to a neutral word like `a', which has a similar distribution across classes.\n\nBackoff to background model: We also consider falling back upon a more generic word recognition model trained upon a larger, less-specialized corpus whenever the foreground word recognition model predicts UNK. Figure 1 depicts this scenario pictorially."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"While BIBREF7 demonstrate strong word recognition performance, a drawback of their evaluation setup is that they only attack and evaluate on the subset of words that are a part of their training vocabulary. In such a setting, the word recognition performance is unreasonably dependent on the chosen vocabulary size. In principle, one can design models to predict (correctly) only a few chosen words, and ignore the remaining majority and still reach 100% accuracy. For the adversarial setting, rare and unseen words in the wild are particularly critical, as they provide opportunities for the attackers. A reliable word-recognizer should handle these cases gracefully. Below, we explore different ways to back off when the ScRNN predicts UNK (a frequent outcome for rare and unseen words):",
"Pass-through: word-recognizer passes on the (possibly misspelled) word as is.",
"Backoff to neutral word: Alternatively, noting that passing $\\colorbox {gray!20}{\\texttt {UNK}}$ -predicted words through unchanged exposes the downstream model to potentially corrupted text, we consider backing off to a neutral word like `a', which has a similar distribution across classes.",
"Backoff to background model: We also consider falling back upon a more generic word recognition model trained upon a larger, less-specialized corpus whenever the foreground word recognition model predicts UNK. Figure 1 depicts this scenario pictorially."
],
"extractive_spans": [],
"free_form_answer": "Pass-through passes the possibly misspelled word as is, backoff to neutral word backs off to a word with similar distribution across classes and backoff to background model backs off to a more generic word recognition model trained with larger and less specialized corpus.",
"highlighted_evidence": [
"Below, we explore different ways to back off when the ScRNN predicts UNK (a frequent outcome for rare and unseen words):\n\nPass-through: word-recognizer passes on the (possibly misspelled) word as is.\n\nBackoff to neutral word: Alternatively, noting that passing $\\colorbox {gray!20}{\\texttt {UNK}}$ -predicted words through unchanged exposes the downstream model to potentially corrupted text, we consider backing off to a neutral word like `a', which has a similar distribution across classes.\n\nBackoff to background model: We also consider falling back upon a more generic word recognition model trained upon a larger, less-specialized corpus whenever the foreground word recognition model predicts UNK. Figure 1 depicts this scenario pictorially."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"While BIBREF7 demonstrate strong word recognition performance, a drawback of their evaluation setup is that they only attack and evaluate on the subset of words that are a part of their training vocabulary. In such a setting, the word recognition performance is unreasonably dependent on the chosen vocabulary size. In principle, one can design models to predict (correctly) only a few chosen words, and ignore the remaining majority and still reach 100% accuracy. For the adversarial setting, rare and unseen words in the wild are particularly critical, as they provide opportunities for the attackers. A reliable word-recognizer should handle these cases gracefully. Below, we explore different ways to back off when the ScRNN predicts UNK (a frequent outcome for rare and unseen words):",
"Pass-through: word-recognizer passes on the (possibly misspelled) word as is.",
"Backoff to neutral word: Alternatively, noting that passing $\\colorbox {gray!20}{\\texttt {UNK}}$ -predicted words through unchanged exposes the downstream model to potentially corrupted text, we consider backing off to a neutral word like `a', which has a similar distribution across classes.",
"Backoff to background model: We also consider falling back upon a more generic word recognition model trained upon a larger, less-specialized corpus whenever the foreground word recognition model predicts UNK. Figure 1 depicts this scenario pictorially."
],
"extractive_spans": [],
"free_form_answer": "Backoff to \"a\" when an UNK-predicted word is encountered, backoff to a more generic word recognition model when the model predicts UNK",
"highlighted_evidence": [
"Below, we explore different ways to back off when the ScRNN predicts UNK (a frequent outcome for rare and unseen words):\n\nPass-through: word-recognizer passes on the (possibly misspelled) word as is.\n\nBackoff to neutral word: Alternatively, noting that passing $\\colorbox {gray!20}{\\texttt {UNK}}$ -predicted words through unchanged exposes the downstream model to potentially corrupted text, we consider backing off to a neutral word like `a', which has a similar distribution across classes.\n\nBackoff to background model: We also consider falling back upon a more generic word recognition model trained upon a larger, less-specialized corpus whenever the foreground word recognition model predicts UNK. Figure 1 depicts this scenario pictorially."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"4857c606a55a83454e8d81ffe17e05cf8bc4b75f",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What does the \"sensitivity\" quantity denote?",
"What end tasks do they evaluate on?",
"What is a semicharacter architecture?",
"Do they experiment with offering multiple candidate corrections and voting on the model output, since this seems highly likely to outperform a one-best correction?",
"Why is the adversarial setting appropriate for misspelling recognition?",
"Why do they experiment with RNNs instead of transformers for this task?",
"How do the backoff strategies work?"
],
"question_id": [
"0d9fcc715dee0ec85132b3f4a730d7687b6a06f4",
"8910ee2236a497c92324bbbc77c596dba39efe46",
"2c59528b6bc5b5dc28a7b69b33594b274908cca6",
"6b367775a081f4d2423dc756c9b65b6eef350345",
"bc01853512eb3c11528e33003ceb233d7c1d7038",
"67ec8ef85844e01746c13627090dc2706bb2a4f3",
"ba539cab80d25c3e20f39644415ed48b9e4e4185"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Adversarial spelling mistakes inducing sentiment misclassification and word-recognition defenses.",
"Figure 1: A schematic sketch of our proposed word recognition system, consisting of a foreground and a background model. We train the foreground model on the smaller, domain-specific dataset, and the background model on a larger dataset (e.g., the IMDB movie corpus). We train both models to reconstruct the correct word from the orthography and context of the individual words, using synthetically corrupted inputs during training. Subse-",
"Table 2: Word Error Rates (WER) of ScRNN with each backoff strategy, plus ATD and an ScRNN trained only on the background corpus (78K vocabulary) The error rates include 5.25% OOV words.",
"Table 3: Accuracy of various classification models, with and without defenses, under adversarial attacks. Even 1-character attacks significantly degrade classifier performance. Our defenses confer robustness, recovering over 76% of the original accuracy, under the ‘all’ setting for all four model classes.",
"Table 4: Accuracy of BERT, with and without defenses, on MRPC when attacked under the ‘all’ attack setting.",
"Figure 2: Effect of sensitivity and word error rate on robustness (depicted by the bubble sizes) in word-only models (left) and char-only models (right).",
"Table 5: Sensitivity values for word recognizers. Neutral backoff shows lowest sensitivity."
],
"file": [
"1-Table1-1.png",
"4-Figure1-1.png",
"5-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Figure2-1.png",
"8-Table5-1.png"
]
} | [
"What does the \"sensitivity\" quantity denote?",
"What end tasks do they evaluate on?",
"What is a semicharacter architecture?",
"Why is the adversarial setting appropriate for misspelling recognition?",
"How do the backoff strategies work?"
] | [
[
"1905.11268-Model Sensitivity-1"
],
[
"1905.11268-Robustness to adversarial attacks-3",
"1905.11268-Robustness to adversarial attacks-1"
],
[
"1905.11268-ScRNN with Backoff-1",
"1905.11268-Introduction-5"
],
[
"1905.11268-Introduction-1"
],
[
"1905.11268-ScRNN with Backoff-4",
"1905.11268-ScRNN with Backoff-5",
"1905.11268-ScRNN with Backoff-2",
"1905.11268-ScRNN with Backoff-3"
]
] | [
"The expected number of unique outputs a word recognition system assigns to a set of adversarial perturbations ",
"Sentiment analysis and paraphrase detection under adversarial attacks",
"A semi-character based RNN (ScRNN) treats the first and last characters individually, and is agnostic to the ordering of the internal characters",
"Adversarial misspellings are a real-world problem",
"Backoff to \"a\" when an UNK-predicted word is encountered, backoff to a more generic word recognition model when the model predicts UNK"
] | 169 |
1908.04042 | Evaluating Tag Recommendations for E-Book Annotation Using a Semantic Similarity Metric | In this paper, we present our work to support publishers and editors in finding descriptive tags for e-books through tag recommendations. We propose a hybrid tag recommendation system for e-books, which leverages search query terms from Amazon users and e-book metadata, which is assigned by publishers and editors. Our idea is to mimic the vocabulary of users in Amazon, who search for and review e-books, and to combine these search terms with editor tags in a hybrid tag recommendation approach. In total, we evaluate 19 tag recommendation algorithms on the review content of Amazon users, which reflects the readers' vocabulary. Our results show that we can improve the performance of tag recommender systems for e-books both concerning tag recommendation accuracy, diversity as well as a novel semantic similarity metric, which we also propose in this paper. | {
"paragraphs": [
[
"When people shop for books online in e-book stores such as, e.g., the Amazon Kindle store, they enter search terms with the goal to find e-books that meet their preferences. Such e-books have a variety of metadata such as, e.g., title, author or keywords, which can be used to retrieve e-books that are relevant to the query. As a consequence, from the perspective of e-book publishers and editors, annotating e-books with tags that best describe the content and which meet the vocabulary of users (e.g., when searching and reviewing e-books) is an essential task BIBREF0 .",
"Problem and aim of this work. Annotating e-books with suitable tags is, however, a complex task as users' vocabulary may differ from the one of editors. Such a vocabulary mismatch yet hinders effective organization and retrieval BIBREF1 of e-books. For example, while editors mostly annotate e-books with descriptive tags that reflect the book's content, Amazon users often search for parts of the book title. In the data we use for the present study (see Section SECREF2 ), we find that around 30% of the Amazon search terms contain parts of e-book titles.",
"In this paper, we present our work to support editors in the e-book annotation process with tag recommendations BIBREF2 , BIBREF3 . Our idea is to exploit user-generated search query terms in Amazon to mimic the vocabulary of users in Amazon, who search for e-books. We combine these search terms with tags assigned by editors in a hybrid tag recommendation approach. Thus, our aim is to show that we can improve the performance of tag recommender systems for e-books both concerning recommendation accuracy as well as semantic similarity and tag recommendation diversity.",
"Related work. In tag recommender systems, mostly content-based algorithms (e.g., BIBREF4 , BIBREF5 ) are used to recommend tags to annotate resources such as e-books. In our work, we incorporate both content features of e-books (i.e., title and description text) as well as Amazon search terms to account for the vocabulary of e-book readers.",
"Concerning the evaluation of tag recommendation systems, most studies focus on measuring the accuracy of tag recommendations (e.g., BIBREF2 ). However, the authors of BIBREF6 suggest also to use beyond-accuracy metrics such as diversity to evaluate the quality of tag recommendations. In our work, we measure recommendation diversity in addition to recommendation accuracy and propose a novel metric termed semantic similarity to validate semantic matches of tag recommendations.",
"Approach and findings. We exploit editor tags and user-generated search terms as input for tag recommendation approaches. Our evaluation comprises of a rich set of 19 different algorithms to recommend tags for e-books, which we group into (i) popularity-based, (ii) similarity-based (i.e., using content information), and (iii) hybrid approaches. We evaluate our approaches in terms of accuracy, semantic similarity and diversity on the review content of Amazon users, which reflects the readers' vocabulary. With semantic similarity, we measure how semantically similar (based on learned Doc2Vec BIBREF7 embeddings) the list of recommended tags is to the list of relevant tags. We use this additional metric to measure not only exact “hits” of our recommendations but also semantic matches.",
"Our evaluation results show that combining both data sources enhances the quality of tag recommendations for annotating e-books. Furthermore, approaches that solely train on Amazon search terms provide poor performance in terms of accuracy but deliver good results in terms of semantic similarity and recommendation diversity."
],
[
"In this section, we describe our dataset as well as our tag recommendation approaches we propose to annotate e-books."
],
[
"Our dataset contains two sources of data, one to generate tag recommendations and another one to evaluate tag recommendations. HGV GmbH has collected all data sources and we provide the dataset statistics in Table TABREF3 .",
"Data used to generate recommendations. We employ two sources of e-book annotation data: (i) editor tags, and (ii) Amazon search terms. For editor tags, we collect data of 48,705 e-books from 13 publishers, namely Kunstmann, Delius-Klasnig, VUR, HJR, Diogenes, Campus, Kiwi, Beltz, Chbeck, Rowohlt, Droemer, Fischer and Neopubli. Apart from the editor tags, this data contains metadata fields of e-books such as the ISBN, the title, a description text, the author and a list of BISACs, which are identifiers for book categories.",
"For the Amazon search terms, we collect search query logs of 21,243 e-books for 12 months (i.e., November 2017 to October 2018). Apart from the search terms, this data contains the e-books' ISBNs, titles and description texts.",
"Table TABREF3 shows that the overlap of e-books that have editor tags and Amazon search terms is small (i.e., only 497). Furthermore, author and BISAC (i.e., the book category identifier) information are primarily available for e-books that contain editor tags. Consequently, both data sources provide complementary information, which underpins the intention of this work, i.e., to evaluate tag recommendation approaches using annotation sources from different contexts.",
"Data used to evaluate recommendations. For evaluation, we use a third set of e-book annotations, namely Amazon review keywords. These review keywords are extracted from the Amazon review texts and are typically provided in the review section of books on Amazon. Our idea is to not favor one or the other data source (i.e., editor tags and Amazon search terms) when evaluating our approaches against expected tags. At the same time, we consider Amazon review keywords to be a good mixture of editor tags and search terms as they describe both the content and the users' opinions on the e-books (i.e., the readers' vocabulary). As shown in Table TABREF3 , we collect Amazon review keywords for 2,896 e-books (publishers: Kiwi, Rowohlt, Fischer, and Droemer), which leads to 33,663 distinct review keywords and on average 30 keyword assignments per e-book."
],
[
"We implement three types of tag recommendation approaches, i.e., (i) popularity-based, (ii) similarity-based (i.e., using content information), and (iii) hybrid approaches. Due to the lack of personalized tags (i.e., we do not know which user has assigned a tag), we do not implement other types of algorithms such as collaborative filtering BIBREF8 . In total, we evaluate 19 different algorithms to recommend tags for annotating e-books.",
"",
"Popularity-based approaches. We recommend the most frequently used tags in the dataset, which is a common strategy for tag recommendations BIBREF9 . That is, a most popular INLINEFORM0 approach for editor tags and a most popular INLINEFORM1 approach for Amazon search terms. For e-books, for which we also have author (= INLINEFORM2 and INLINEFORM3 ) or BISAC (= INLINEFORM4 and INLINEFORM5 ) information, we use these features to further filter the recommended tags, i.e., to only recommend tags that were used to annotate e-books of a specific author or a specific BISAC.",
"We combine both data sources (i.e., editor tags and Amazon search terms) using a round-robin combination strategy, which ensures an equal weight for both sources. This gives us three additional popularity-based algorithms (= INLINEFORM0 , INLINEFORM1 and INLINEFORM2 ).",
"",
"Similarity-based approaches. We exploit the textual content of e-books (i.e., description or title) to recommend relevant tags BIBREF10 . For this, we first employ a content-based filtering approach BIBREF11 based on TF-IDF BIBREF12 to find top- INLINEFORM0 similar e-books. For each of the similar e-books, we then either extract the assigned editor tags (= INLINEFORM2 and INLINEFORM3 ) or the Amazon search terms (= INLINEFORM4 and INLINEFORM5 ). To combine the tags of the top- INLINEFORM6 similar e-books, we use the cross-source algorithm BIBREF13 , which favors tags that were used to annotate more than one similar e-book (i.e., tags that come from multiple recommendation sources). The final tag relevancy is calculated as: DISPLAYFORM0 ",
"where INLINEFORM0 denotes the number of distinct e-books, which yielded the recommendation of tag INLINEFORM1 , to favor tags that come from multiple sources and INLINEFORM2 is the similarity score of the corresponding e-book. We again use a round-robin strategy to combine both data sources (= INLINEFORM3 and INLINEFORM4 ).",
"",
"Hybrid approaches. We use the previously mentioned cross-source algorithm BIBREF13 to construct four hybrid recommendation approaches. In this case, tags are favored that are recommended by more than one algorithm.",
"Hence, to create a popularity-based hybrid (= INLINEFORM0 ), we combine the best three performing popularity-based approaches from the ones (i) without any contextual signal, (ii) with the author as context, and (iii) with BISAC as context. In the case of the similarity-based hybrid (= INLINEFORM1 ), we utilize the two best performing similarity-based approaches from the ones (i) which use the title, and (ii) which use the description text. We further define INLINEFORM2 , a hybrid approach that combines the three popularity-based methods of INLINEFORM3 and the two similarity-based approaches of INLINEFORM4 . Finally, we define INLINEFORM5 as a hybrid approach that uses the best performing popularity-based and the best performing similarity-based approach (see Figure FIGREF11 in Section SECREF4 for more details about the particular algorithm combinations)."
],
[
"In this section, we describe our evaluation protocol as well as the measures we use to evaluate and compare our tag recommendation approaches."
],
[
"For evaluation, we use the third set of e-book annotations, namely Amazon review keywords. As described in Section SECREF1 , these review keywords are extracted from the Amazon review texts and thus, reflect the users' vocabulary. We evaluate our approaches for the 2,896 e-books, for whom we got review keywords. To follow common practice for tag recommendation evaluation BIBREF14 , we predict the assigned review keywords (= our test set) for respective e-books."
],
[
"In this work, we measure (i) recommendation accuracy, (ii) semantic similarity, and (iii) recommendation diversity to evaluate the quality of our approaches from different perspectives.",
"",
"Recommendation accuracy. We use Normalized Discounted Cumulative Gain (nDCG) BIBREF15 to measure the accuracy of the tag recommendation approaches. The nDCG measure is a standard ranking-dependent metric that not only measures how many tags can be correctly predicted but also takes into account their position in the recommendation list with length of INLINEFORM0 . It is based on the Discounted Cummulative Gain, which is given by: DISPLAYFORM0 ",
"where INLINEFORM0 is a function that returns 1 if the recommended tag at position INLINEFORM1 in the recommended list is relevant. We then calculate DCG@ INLINEFORM2 for every evaluated e-book by dividing DCG@ INLINEFORM3 with the ideal DCG value iDCG@ INLINEFORM4 , which is the highest possible DCG value that can be achieved if all the relevant tags would be recommended in the correct order. It is given by the following formula BIBREF15 : DISPLAYFORM0 ",
"",
"Semantic similarity. One precondition of standard recommendation accuracy measures is that to generate a “hit”, the recommended tag needs to be an exact syntactical match to the one from the test set. When tags are recommended from one data source and compared to tags from another source, this can be problematic. For example, if we recommend the tag “victim” but expect the tag “prey”, we would mark this as a mismatch, therefore being a bad recommendation. But if we know that the corresponding e-book is a crime novel, the recommended tag would be (semantically) descriptive to reflect the book's content. Hence, in this paper, we propose to additionally measure the semantic similarity between recommended tags and tags from the test set (i.e., the Amazon review keywords).",
"Over the last four years, there have been several notable publications in the area of applying deep learning to uncover semantic relationships between textual content (e.g., by learning word embeddings with Word2Vec BIBREF16 , BIBREF17 ). Based on this, we propose an alternative measure of recommendation quality by learning the semantic relationships from both vocabularies and then using it to compare how semantically similar the recommended tags are to the expected review keywords. For this, we first extract the textual content in the form of the description text, title, editor tags and Amazon search terms of e-books from our dataset. We then train a Doc2Vec BIBREF7 model on the content. Then, we use the model to infer the latent representation for both the complete list of recommended tags as well as the list of expected tags from the test set. Finally, we use the cosine similarity measure to calculate how semantically similar these two lists are.",
"",
"Recommendation diversity. As defined in BIBREF18 , we calculate recommendation diversity as the average dissimilarity of all pairs of tags in the list of recommended tags. Thus, given a distance function INLINEFORM0 that corresponds to the dissimilarity between two tags INLINEFORM1 and INLINEFORM2 in the list of recommended tags, INLINEFORM3 is given as the average dissimilarity of all pairs of tags: DISPLAYFORM0 ",
" where INLINEFORM0 is the number of evaluated e-books and the dissimilarity function is defined as INLINEFORM1 . In our experiments, we use the previously trained Doc2Vec model to extract the latent representation of a specific tag. The similarity of two tags INLINEFORM2 is then calculated with the Cosine similarity measure using the latent vector representations of respective tags INLINEFORM3 and INLINEFORM4 ."
],
[
"Concerning tag recommendation accuracy, in this section, we report results for different values of INLINEFORM0 (i.e., number of recommended tags). For the beyond-accuracy experiment, we use the full list of recommended tags (i.e., INLINEFORM1 )."
],
[
"Figure FIGREF11 shows the results of the accuracy experiment for the (i) popularity-based, (ii) similarity-based, and (iii) hybrid tag recommendation approaches.",
"",
"Popularity-based approaches. In Figure FIGREF11 , we see that popularity-based approaches based on editor tags tend to perform better than if trained on Amazon search terms. If we take into account contextual information like BISAC or author, we can further improve accuracy in terms of INLINEFORM0 . That is, we find that using popular tags from e-books of a specific author leads to the best accuracy of the popularity-based approaches. This suggests that editors and readers do seem to reuse tags for e-books of same authors. If we use both editor tags and Amazon search terms, we can further increase accuracy, especially for higher values of INLINEFORM1 like in the case of INLINEFORM2 . This is, however, not the case for INLINEFORM3 as the accuracy of the integrated INLINEFORM4 approach is low. The reason for this is the limited amount of e-books from within the Amazon search query logs that have BISAC information (i.e., only INLINEFORM5 ).",
"",
"Similarity-based approaches. We further improve accuracy if we first find similar e-books and then extract their top- INLINEFORM0 tags in a cross-source manner as described in Section SECREF4 .",
"As shown in Figure FIGREF11 , using the description text to find similar e-books results in more accurate tag recommendations than using the title (i.e., INLINEFORM0 for INLINEFORM1 ). This is somehow expected as the description text consists of a bigger corpus of words (i.e., multiple sentences) than the title. Concerning the collected Amazon search query logs, extracting and then recommending tags from this source results in a much lower accuracy performance. Thus, these results also suggest to investigate beyond-accuracy metrics as done in Section SECREF17 .",
"",
"Hybrid approaches. Figure FIGREF11 shows the accuracy results of the four hybrid approaches. By combining the best three popularity-based approaches, we outperform all of the initially evaluated popularity algorithms (i.e., INLINEFORM0 for INLINEFORM1 ). On the contrary, the combination of the two best performing similarity-based approaches INLINEFORM2 and INLINEFORM3 does not yield better accuracy. The negative impact of using a lower-performing approach such as INLINEFORM4 within a hybrid combination can also be observed in INLINEFORM5 for lower values of INLINEFORM6 . Overall, this confirms our initial intuition that combining the best performing popularity-based approach with the best similarity-based approach should result in the highest accuracy (i.e., INLINEFORM7 for INLINEFORM8 ). Moreover, our goal, namely to exploit editor tags in combination with search terms used by readers to increase the metadata quality of e-books, is shown to be best supported by applying hybrid approaches as they provide the best prediction results."
],
[
"Figure FIGREF16 illustrates the results of the experiments, which measure the recommendation impact beyond-accuracy.",
"",
"Semantic similarity. Figure FIGREF16 illustrates the results of our proposed semantic similarity measure. To compare our proposed measure to standard accuracy measures such as INLINEFORM0 , we use Kendall's Tau rank correlation BIBREF19 as suggested by BIBREF20 for automatic evaluation of information-ordering tasks. From that, we rank our recommendation approaches according to both accuracy and semantic similarity and calculate the relation between both rankings. This results in INLINEFORM1 with a p-value < INLINEFORM2 , which suggests a high correlation between the semantic similarity and the standard accuracy measure.",
"Therefore, the semantic similarity measure helps us interpret the recommendation quality. For instance, we achieve the lowest INLINEFORM0 values with the similarity-based approaches that recommend Amazon search terms (i.e., INLINEFORM1 and INLINEFORM2 ). When comparing these results with others from Figure FIGREF11 , a conclusion could be quickly drawn that the recommended tags are merely unusable. However, by looking at Figure FIGREF16 , we see that, although these approaches do not provide the highest recommendation accuracy, they still result in tag recommendations that are semantically related at a high degree to the expected annotations from the test set. Overall, this suggests that approaches, which provide a poor accuracy performance concerning INLINEFORM4 but provide a good performance regarding semantic similarity could still be helpful for annotating e-books.",
"",
"Recommendation diversity. Figure FIGREF16 shows the diversity of the tag recommendation approaches. We achieve the highest diversity with the similarity-based approaches, which extract Amazon search terms. Their accuracy is, however, very low. Thus, the combination of the two vocabularies can provide a good trade-off between recommendation accuracy and diversity."
],
[
"In this paper, we present our work to support editors in the e-book annotation process. Specifically, we aim to provide tag recommendations that incorporate both the vocabulary of the editors and e-book readers. Therefore, we train various configurations of tag recommender approaches on editors' tags and Amazon search terms and evaluate them on a dataset containing Amazon review keywords. We find that combining both data sources enhances the quality of tag recommendations for annotating e-books. Furthermore, while approaches that train only on Amazon search terms provide poor performance concerning recommendation accuracy, we show that they still offer helpful annotations concerning recommendation diversity as well as our novel semantic similarity metric.",
"",
"Future work. For future work, we plan to validate our findings using another dataset, e.g., by recommending tags for scientific articles and books in BibSonomy. With this, we aim to demonstrate the usefulness of the proposed approach in a similar domain and to enhance the reproducibility of our results by using an open dataset.",
"Moreover, we plan to evaluate our tag recommendation approaches in a study with domain users. Also, we want to improve our similarity-based approaches by integrating novel embedding approaches BIBREF16 , BIBREF17 as we did, for example, with our proposed semantic similarity evaluation metric. Finally, we aim to incorporate explanations for recommended tags so that editors of e-book annotations receive additional support in annotating e-books BIBREF21 . By making the underlying (semantic) reasoning visible to the editor who is in charge of tailoring annotations, we aim to support two goals: (i) allowing readers to discover e-books more efficiently, and (ii) enabling publishers to leverage semi-automatic categorization processes for e-books. In turn, providing explanations fosters control over which vocabulary to choose when tagging e-books for different application contexts.",
"",
"Acknowledgments. The authors would like to thank Peter Langs, Jan-Philipp Wolf and Alyona Schraa from HGV GmbH for providing the e-book annotation data. This work was funded by the Know-Center GmbH (FFG COMET Program), the FFG Data Market Austria project and the AI4EU project (EU grant 825619). The Know-Center GmbH is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Ministry of Transport, Innovation and Technology, the Austrian Ministry of Economics and Labor and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency (FFG)."
]
],
"section_name": [
"Introduction",
"Method",
"Dataset",
"Tag Recommendation Approaches",
"Experimental Setup",
"Evaluation Protocol",
"Evaluation Metrics",
"Results",
"Recommendation Accuracy Evaluation",
"Beyond-Accuracy Evaluation",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"0220e3af88a8062518e45a40e5b721d655eca090",
"e3a99d855e7718ecb7b6dcd798277c47203810aa"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Data used to generate recommendations. We employ two sources of e-book annotation data: (i) editor tags, and (ii) Amazon search terms. For editor tags, we collect data of 48,705 e-books from 13 publishers, namely Kunstmann, Delius-Klasnig, VUR, HJR, Diogenes, Campus, Kiwi, Beltz, Chbeck, Rowohlt, Droemer, Fischer and Neopubli. Apart from the editor tags, this data contains metadata fields of e-books such as the ISBN, the title, a description text, the author and a list of BISACs, which are identifiers for book categories."
],
"extractive_spans": [
"48,705"
],
"free_form_answer": "",
"highlighted_evidence": [
"For editor tags, we collect data of 48,705 e-books from 13 publishers, namely Kunstmann, Delius-Klasnig, VUR, HJR, Diogenes, Campus, Kiwi, Beltz, Chbeck, Rowohlt, Droemer, Fischer and Neopubli. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"4fcd5fec657a87a2da9afdd900961d34eedbaac7"
],
"answer": [
{
"evidence": [
"Hybrid approaches. Figure FIGREF11 shows the accuracy results of the four hybrid approaches. By combining the best three popularity-based approaches, we outperform all of the initially evaluated popularity algorithms (i.e., INLINEFORM0 for INLINEFORM1 ). On the contrary, the combination of the two best performing similarity-based approaches INLINEFORM2 and INLINEFORM3 does not yield better accuracy. The negative impact of using a lower-performing approach such as INLINEFORM4 within a hybrid combination can also be observed in INLINEFORM5 for lower values of INLINEFORM6 . Overall, this confirms our initial intuition that combining the best performing popularity-based approach with the best similarity-based approach should result in the highest accuracy (i.e., INLINEFORM7 for INLINEFORM8 ). Moreover, our goal, namely to exploit editor tags in combination with search terms used by readers to increase the metadata quality of e-books, is shown to be best supported by applying hybrid approaches as they provide the best prediction results."
],
"extractive_spans": [],
"free_form_answer": "A hybrid model consisting of best performing popularity-based approach with the best similarity-based approach",
"highlighted_evidence": [
"Overall, this confirms our initial intuition that combining the best performing popularity-based approach with the best similarity-based approach should result in the highest accuracy (i.e., INLINEFORM7 for INLINEFORM8 )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"d2cfcf7026276f37987b7046f0254de03b31794a",
"efcf2d9d61895fe92fc32957b54c1eaf84c3b9e1"
],
"answer": [
{
"evidence": [
"Recommendation diversity. As defined in BIBREF18 , we calculate recommendation diversity as the average dissimilarity of all pairs of tags in the list of recommended tags. Thus, given a distance function INLINEFORM0 that corresponds to the dissimilarity between two tags INLINEFORM1 and INLINEFORM2 in the list of recommended tags, INLINEFORM3 is given as the average dissimilarity of all pairs of tags: DISPLAYFORM0"
],
"extractive_spans": [
"average dissimilarity of all pairs of tags in the list of recommended tags"
],
"free_form_answer": "",
"highlighted_evidence": [
"Recommendation diversity. As defined in BIBREF18 , we calculate recommendation diversity as the average dissimilarity of all pairs of tags in the list of recommended tags."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Recommendation diversity. As defined in BIBREF18 , we calculate recommendation diversity as the average dissimilarity of all pairs of tags in the list of recommended tags. Thus, given a distance function INLINEFORM0 that corresponds to the dissimilarity between two tags INLINEFORM1 and INLINEFORM2 in the list of recommended tags, INLINEFORM3 is given as the average dissimilarity of all pairs of tags: DISPLAYFORM0"
],
"extractive_spans": [
" the average dissimilarity of all pairs of tags in the list of recommended tags"
],
"free_form_answer": "",
"highlighted_evidence": [
"As defined in BIBREF18 , we calculate recommendation diversity as the average dissimilarity of all pairs of tags in the list of recommended tags. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"54d8515e74481154344c457dc8740890421aaae5",
"61ee2a8486d51c153d6f3aea0ca4661670aff13a"
],
"answer": [
{
"evidence": [
"Data used to evaluate recommendations. For evaluation, we use a third set of e-book annotations, namely Amazon review keywords. These review keywords are extracted from the Amazon review texts and are typically provided in the review section of books on Amazon. Our idea is to not favor one or the other data source (i.e., editor tags and Amazon search terms) when evaluating our approaches against expected tags. At the same time, we consider Amazon review keywords to be a good mixture of editor tags and search terms as they describe both the content and the users' opinions on the e-books (i.e., the readers' vocabulary). As shown in Table TABREF3 , we collect Amazon review keywords for 2,896 e-books (publishers: Kiwi, Rowohlt, Fischer, and Droemer), which leads to 33,663 distinct review keywords and on average 30 keyword assignments per e-book."
],
"extractive_spans": [
"33,663"
],
"free_form_answer": "",
"highlighted_evidence": [
"As shown in Table TABREF3 , we collect Amazon review keywords for 2,896 e-books (publishers: Kiwi, Rowohlt, Fischer, and Droemer), which leads to 33,663 distinct review keywords and on average 30 keyword assignments per e-book."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Data used to evaluate recommendations. For evaluation, we use a third set of e-book annotations, namely Amazon review keywords. These review keywords are extracted from the Amazon review texts and are typically provided in the review section of books on Amazon. Our idea is to not favor one or the other data source (i.e., editor tags and Amazon search terms) when evaluating our approaches against expected tags. At the same time, we consider Amazon review keywords to be a good mixture of editor tags and search terms as they describe both the content and the users' opinions on the e-books (i.e., the readers' vocabulary). As shown in Table TABREF3 , we collect Amazon review keywords for 2,896 e-books (publishers: Kiwi, Rowohlt, Fischer, and Droemer), which leads to 33,663 distinct review keywords and on average 30 keyword assignments per e-book."
],
"extractive_spans": [
"33,663 distinct review keywords "
],
"free_form_answer": "",
"highlighted_evidence": [
"As shown in Table TABREF3 , we collect Amazon review keywords for 2,896 e-books (publishers: Kiwi, Rowohlt, Fischer, and Droemer), which leads to 33,663 distinct review keywords and on average 30 keyword assignments per e-book."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2de2a5f33ccb9e07d541a374e2ca12b3c0025231",
"774003ff4af880f639ebb79d5ab7c4ae1b19d5d3"
],
"answer": [
{
"evidence": [
"Data used to generate recommendations. We employ two sources of e-book annotation data: (i) editor tags, and (ii) Amazon search terms. For editor tags, we collect data of 48,705 e-books from 13 publishers, namely Kunstmann, Delius-Klasnig, VUR, HJR, Diogenes, Campus, Kiwi, Beltz, Chbeck, Rowohlt, Droemer, Fischer and Neopubli. Apart from the editor tags, this data contains metadata fields of e-books such as the ISBN, the title, a description text, the author and a list of BISACs, which are identifiers for book categories.",
"For the Amazon search terms, we collect search query logs of 21,243 e-books for 12 months (i.e., November 2017 to October 2018). Apart from the search terms, this data contains the e-books' ISBNs, titles and description texts."
],
"extractive_spans": [
"48,705 e-books from 13 publishers",
"search query logs of 21,243 e-books for 12 months"
],
"free_form_answer": "",
"highlighted_evidence": [
"We employ two sources of e-book annotation data: (i) editor tags, and (ii) Amazon search terms. For editor tags, we collect data of 48,705 e-books from 13 publishers, namely Kunstmann, Delius-Klasnig, VUR, HJR, Diogenes, Campus, Kiwi, Beltz, Chbeck, Rowohlt, Droemer, Fischer and Neopubli. ",
"For the Amazon search terms, we collect search query logs of 21,243 e-books for 12 months (i.e., November 2017 to October 2018). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Data used to generate recommendations. We employ two sources of e-book annotation data: (i) editor tags, and (ii) Amazon search terms. For editor tags, we collect data of 48,705 e-books from 13 publishers, namely Kunstmann, Delius-Klasnig, VUR, HJR, Diogenes, Campus, Kiwi, Beltz, Chbeck, Rowohlt, Droemer, Fischer and Neopubli. Apart from the editor tags, this data contains metadata fields of e-books such as the ISBN, the title, a description text, the author and a list of BISACs, which are identifiers for book categories.",
"Data used to evaluate recommendations. For evaluation, we use a third set of e-book annotations, namely Amazon review keywords. These review keywords are extracted from the Amazon review texts and are typically provided in the review section of books on Amazon. Our idea is to not favor one or the other data source (i.e., editor tags and Amazon search terms) when evaluating our approaches against expected tags. At the same time, we consider Amazon review keywords to be a good mixture of editor tags and search terms as they describe both the content and the users' opinions on the e-books (i.e., the readers' vocabulary). As shown in Table TABREF3 , we collect Amazon review keywords for 2,896 e-books (publishers: Kiwi, Rowohlt, Fischer, and Droemer), which leads to 33,663 distinct review keywords and on average 30 keyword assignments per e-book."
],
"extractive_spans": [],
"free_form_answer": " E-book annotation data: editor tags, Amazon search terms, and Amazon review keywords.",
"highlighted_evidence": [
"Data used to generate recommendations. We employ two sources of e-book annotation data: (i) editor tags, and (ii) Amazon search terms. ",
"Data used to evaluate recommendations. For evaluation, we use a third set of e-book annotations, namely Amazon review keywords. These review keywords are extracted from the Amazon review texts and are typically provided in the review section of books on Amazon. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"3f761bb5857701886da42838934df5de71626245"
],
"answer": [
{
"evidence": [
"Approach and findings. We exploit editor tags and user-generated search terms as input for tag recommendation approaches. Our evaluation comprises of a rich set of 19 different algorithms to recommend tags for e-books, which we group into (i) popularity-based, (ii) similarity-based (i.e., using content information), and (iii) hybrid approaches. We evaluate our approaches in terms of accuracy, semantic similarity and diversity on the review content of Amazon users, which reflects the readers' vocabulary. With semantic similarity, we measure how semantically similar (based on learned Doc2Vec BIBREF7 embeddings) the list of recommended tags is to the list of relevant tags. We use this additional metric to measure not only exact “hits” of our recommendations but also semantic matches."
],
"extractive_spans": [
"popularity-based",
"similarity-based",
"hybrid"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our evaluation comprises of a rich set of 19 different algorithms to recommend tags for e-books, which we group into (i) popularity-based, (ii) similarity-based (i.e., using content information), and (iii) hybrid approaches."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"how many tags do they look at?",
"which algorithm was the highest performer?",
"how is diversity measured?",
"how large is the vocabulary?",
"what dataset was used?",
"what algorithms did they use?"
],
"question_id": [
"4dc268e3d482e504ca80d2ab514e68fd9b1c3af1",
"ab54cd2dc83141bad3cb3628b3f0feee9169a556",
"249c805ee6f2ebe4dbc972126b3d82fb09fa3556",
"b4f881331b975e6e4cab1868267211ed729d782d",
"79413ff5d98957c31866f22179283902650b5bb6",
"29c014baf99fb9f40b5171aab3e2c7f12a748f79"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Statistics of our e-book annotation dataset used to generate and evaluate tag recommendations.",
"Figure 1: Accuracy results with respect to nDCG for (a) popularity-based, (b) similarity-based and (c) hybrid tag recommendation approaches. All results are reported for different numbers of recommended tags (i.e., k ∈ [1, 10]).",
"Figure 2: Beyond-accuracy evaluation results of our tag recommendation approaches. We use the list of 10 recommended tags to calculate the (a) semantic similarity, and (b) recommendation diversity. We provide the boxplots and the mean values for the approaches."
],
"file": [
"2-Table1-1.png",
"4-Figure1-1.png",
"5-Figure2-1.png"
]
} | [
"which algorithm was the highest performer?",
"what dataset was used?"
] | [
[
"1908.04042-Recommendation Accuracy Evaluation-7"
],
[
"1908.04042-Dataset-2",
"1908.04042-Dataset-4",
"1908.04042-Dataset-1"
]
] | [
"A hybrid model consisting of best performing popularity-based approach with the best similarity-based approach",
" E-book annotation data: editor tags, Amazon search terms, and Amazon review keywords."
] | 171 |
1610.00956 | Embracing data abundance: BookTest Dataset for Reading Comprehension | There is a practically unlimited amount of natural language data available. Still, recent work in text comprehension has focused on datasets which are small relative to current computing possibilities. This article is making a case for the community to move to larger data and as a step in that direction it is proposing the BookTest, a new dataset similar to the popular Children's Book Test (CBT), however more than 60 times larger. We show that training on the new data improves the accuracy of our Attention-Sum Reader model on the original CBT test data by a much larger margin than many recent attempts to improve the model architecture. On one version of the dataset our ensemble even exceeds the human baseline provided by Facebook. We then show in our own human study that there is still space for further improvement. | {
"paragraphs": [
[
"Since humans amass more and more generally available data in the form of unstructured text it would be very useful to teach machines to read and comprehend such data and then use this understanding to answer our questions. A significant amount of research has recently focused on answering one particular kind of questions the answer to which depends on understanding a context document. These are cloze-style questions BIBREF0 which require the reader to fill in a missing word in a sentence. An important advantage of such questions is that they can be generated automatically from a suitable text corpus which allows us to produce a practically unlimited amount of them. That opens the task to notoriously data-hungry deep-learning techniques which now seem to outperform all alternative approaches.",
"Two such large-scale datasets have recently been proposed by researchers from Google DeepMind and Facebook AI: the CNN/Daily Mail dataset BIBREF1 and the Children's Book Test (CBT) BIBREF2 respectively. These have attracted a lot of attention from the research community BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 with a new state-of-the-art model coming out every few weeks.",
"However if our goal is a production-level system actually capable of helping humans, we want the model to use all available resources as efficiently as possible. Given that",
"we believe that if the community is striving to bring the performance as far as possible, it should move its work to larger data.",
"This thinking goes in line with recent developments in the area of language modelling. For a long time models were being compared on several \"standard\" datasets with publications often presenting minuscule improvements in performance. Then the large-scale One Billion Word corpus dataset appeared BIBREF15 and it allowed Jozefowicz et al. to train much larger LSTM models BIBREF16 that almost halved the state-of-the-art perplexity on this dataset.",
"We think it is time to make a similar step in the area of text comprehension. Hence we are introducing the BookTest, a new dataset very similar to the Children's Book test but more than 60 times larger to enable training larger models even in the domain of text comprehension. Furthermore the methodology used to create our data can later be used to create even larger datasets when the need arises thanks to further technological progress.",
"We show that if we evaluate a model trained on the new dataset on the now standard Children's Book Test dataset, we see an improvement in accuracy much larger than other research groups achieved by enhancing the model architecture itself (while still using the original CBT training data). By training on the new dataset, we reduce the prediction error by almost one third. On the named-entity version of CBT this brings the ensemble of our models to the level of human baseline as reported by Facebook BIBREF2 . However in the final section we show in our own human study that there is still room for improvement on the CBT beyond the performance of our model."
],
[
"A natural way of testing a reader's comprehension of a text is to ask her a question the answer to which can be deduced from the text. Hence the task we are trying to solve consists of answering a cloze-style question, the answer to which depends on the understanding of a context document provided with the question. The model is also provided with a set of possible answers from which the correct one is to be selected. This can be formalized as follows:",
"The training data consist of tuples INLINEFORM0 , where INLINEFORM1 is a question, INLINEFORM2 is a document that contains the answer to question INLINEFORM3 , INLINEFORM4 is a set of possible answers and INLINEFORM5 is the ground-truth answer. Both INLINEFORM6 and INLINEFORM7 are sequences of words from vocabulary INLINEFORM8 . We also assume that all possible answers are words from the vocabulary, that is INLINEFORM9 . In the CBT and CNN/Daily Mail datasets it is also true that the ground-truth answer INLINEFORM10 appears in the document. This is exploited by many machine learning models BIBREF2 , BIBREF4 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF10 , BIBREF11 , BIBREF12 , however some do not explicitly depend on this property BIBREF1 , BIBREF3 , BIBREF5 , BIBREF9 "
],
[
"We will now briefly review what datasets for text comprehension have been published up to date and look at models which have been recently applied to solving the task we have just described."
],
[
"A crucial condition for applying deep-learning techniques is to have a huge amount of data available for training. For question answering this specifically means having a large number of document-question-answer triples available. While there is an unlimited amount of text available, coming up with relevant questions and the corresponding answers can be extremely labour-intensive if done by human annotators. There were efforts to provide such human-generated datasets, e.g. Microsoft's MCTest BIBREF17 , however their scale is not suitable for deep learning without pre-training on other data BIBREF18 (such as using pre-trained word embedding vectors).",
"Google DeepMind managed to avoid this scale issue with their way of generating document-question-answer triples automatically, closely followed by Facebook with a similar method. Let us now briefly introduce the two resulting datasets whose properties are summarized in Table TABREF8 .",
"These two datasets BIBREF1 exploit a useful feature of online news articles – many articles include a short summarizing sentence near the top of the page. Since all information in the summary sentence is also presented in the article body, we get a nice cloze-style question about the article contents by removing a word from the short summary.",
"The dataset's authors also replaced all named entities in the dataset by anonymous tokens which are further shuffled for each new batch. This forces the model to rely solely on information from the context document, not being able to transfer any meaning of the named entities between documents.",
"This restricts the task to one specific aspect of context-dependent question answering which may be useful however it moves the task further from the real application scenario, where we would like the model to use all information available to answer questions. Furthermore Chen et al. BIBREF5 have suggested that this can make about 17% of the questions unanswerable even by humans. They also claim that more than a half of the question sentences are mere paraphrases or exact matches of a single sentence from the context document. This raises a question to what extent the dataset can test deeper understanding of the articles.",
"The Children's Book Test BIBREF2 uses a different source - books freely available thanks to Project Gutenberg. Since no summary is available, each example consists of a context document formed from 20 consecutive sentences from the story together with a question formed from the subsequent sentence.",
"The dataset comes in four flavours depending on what type of word is omitted from the question sentence. Based on human evaluation done in BIBREF2 it seems that NE and CN are more context dependent than the other two types – prepositions and verbs. Therefore we (and all of the recent publications) focus only on these two word types.",
"Several new datasets related to the (now almost standard) ones above emerged recently. We will now briefly present them and explain how the dataset we are introducing in this article differs from them.",
"The LAMBADA dataset BIBREF19 is designed to measure progress in understanding common-sense questions about short stories that can be easily answered by humans but cannot be answered by current standard machine-learning models (e.g. plain LSTM language models). This dataset is useful for measuring the gap between humans and machine learning algorithms. However, by contrast to our BookTest dataset, it will not allow us to track progress towards the performance of the baseline systems or on examples where machine learning may show super-human performance. Also LAMBADA is just a diagnostic dataset and does not provide ready-to-use question-answering training data, just a plain-text corpus which may moreover include copyrighted books making its use potentially problematic for some purposes. We are providing ready training data consisting of copyright-free books only.",
"The SQuAD dataset BIBREF20 based on Wikipedia and the Who-did-What dataset BIBREF21 based on Gigaword news articles are factoid question-answering datasets where a multi-word answer should be extracted from a context document. This is in contrast to the previous datasets, including CNN/DM, CBT, LAMBADA and our new dataset, which require only single-word answers. Both these datasets however provide less than 130,000 training questions, two orders of magnitude less than our dataset does.",
"The Story Cloze Test BIBREF22 provides a crowd-sourced corpus of 49,255 commonsense stories for training and 3,744 testing stories with right and wrong endings. Hence the dataset is again rather small. Similarly to LAMBADA, the Story Cloze Test was designed to be easily answerable by humans.",
"In the WikiReading BIBREF23 dataset the context document is formed from a Wikipedia article and the question-answer pair is taken from the corresponding WikiData page. For each entity (e.g. Hillary Clinton), WikiData contain a number of property-value pairs (e.g. place of birth: Chicago) which form the datasets's question-answer pairs. The dataset is certainly relevant to the community, however the questions are of very limited variety with only 20 properties (and hence unique questions) covering INLINEFORM0 of the dataset. Furthermore many of the frequent properties are mentioned at a set spot within the article (e.g. the date of birth is almost always in brackets behind the name of a person) which may make the task easier for machines. We are trying to provide a more varied dataset.",
"Although there are several datasets related to task we are aiming to solve, they differ sufficiently for our dataset to bring new value to the community. Its biggest advantage is its size which can furthermore be easily upscaled without expensive human annotation. Finally while we are emphasizing the differences, models could certainly benefit from as diverse a collection of datasets as possible."
],
[
"A first major work applying deep-learning techniques to text comprehension was Hermann et al. BIBREF1 . This work was followed by the application of Memory Networks to the same task BIBREF2 . Later three models emerged around the same time BIBREF3 , BIBREF4 , BIBREF5 including our psr model BIBREF4 . The AS Reader inspired several subsequent models that use it as a sub-component in a diverse ensemble BIBREF8 ; extend it with a hierarchical structure BIBREF6 , BIBREF24 , BIBREF7 ; compute attention over the context document for every word in the query BIBREF10 or use two-way context-query attention mechanism for every word in the context and the query BIBREF11 that is similar in its spirit to models recently proposed in different domains, e.g. BIBREF25 in information retrieval. Other neural approaches to text comprehension are explored in BIBREF9 , BIBREF12 ."
],
[
"Accuracy in any machine learning tasks can be enhanced either by improving a machine learning model or by using more in-domain training data. Current state of the art models BIBREF6 , BIBREF7 , BIBREF8 , BIBREF11 improve over AS Reader's accuracy on CBT NE and CN datasets by 1-2 percent absolute. This suggests that with current techniques there is only limited room for improvement on the algorithmic side.",
"The other possibility to improve performance is simply to use more training data. The importance of training data was highlighted by the frequently quoted Mercer's statement that “There is no data like more data.” The observation that having more data is often more important than having better algorithms has been frequently stressed since then BIBREF13 , BIBREF14 .",
"As a step in the direction of exploiting the potential of more data in the domain of text comprehension, we created a new dataset called BookTest similar to, but much larger than the widely used CBT and CNN/DM datasets."
],
[
"Similarly to the CBT, our BookTest dataset is derived from books available through project Gutenberg. We used 3555 copyright-free books to extract CN examples and 10507 books for NE examples, for comparison the CBT dataset was extracted from just 108 books.",
"When creating our dataset we follow the same procedure as was used to create the CBT dataset BIBREF2 . That is, we detect whether each sentence contains either a named entity or a common noun that already appeared in one of the preceding twenty sentences. This word is then replaced by a gap tag (XXXXX) in this sentence which is hence turned into a cloze-style question. The preceding 20 sentences are used as the context document. For common noun and named entity detection we use the Stanford POS tagger BIBREF27 and Stanford NER BIBREF28 .",
"The training dataset consists of the original CBT NE and CN data extended with new NE and CN examples. The new BookTest dataset hence contains INLINEFORM0 training examples and INLINEFORM1 tokens.",
"The validation dataset consists of INLINEFORM0 NE and INLINEFORM1 CN questions. We have one test set for NE and one for CN, each containing INLINEFORM2 examples. The training, validation and test sets were generated from non-overlapping sets of books.",
"When generating the dataset we removed all editions of books used to create CBT validation and test sets from our training dataset. Therefore the models trained on the BookTest corpus can be evaluated on the original CBT data and they can be compared with recent text-comprehension models utilizing this dataset BIBREF2 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 ."
],
[
"We will now use our psr model to evaluate the performance gain from increasing the dataset size."
],
[
"In BIBREF4 we introduced the psr , which at the time of publication significantly outperformed all other architectures on the CNN, DM and CBT datasets. This model is built to leverage the fact that the answer is a single word from the context document. Similarly to many other models it uses attention over the document – intuitively a measure of how relevant each word is to answering the question. However while most previous models used this attention as weights to calculate a blended representation of the answer word, we simply sum the attention across all occurrences of each unique words and then simply select the word with the highest sum as the final answer. While simple, this trick seems both to improve accuracy and to speed-up training. It was adopted by many subsequent models BIBREF8 , BIBREF6 , BIBREF7 , BIBREF10 , BIBREF11 , BIBREF24 .",
"Let us now describe the model in more detail. Figure FIGREF21 may help you in understanding the following paragraphs.",
"The words from the document and the question are first converted into vector embeddings using a look-up matrix INLINEFORM0 . The document is then read by a bidirectional GRU network BIBREF29 . A concatenation of the hidden states of the forward and backward GRUs at each word is then used as a contextual embedding of this word, intuitively representing the context in which the word is appearing. We can also understand it as representing the set of questions to which this word may be an answer.",
"Similarly the question is read by a bidirectional GRU but in this case only the final hidden states are concatenated to form the question embedding.",
"The attention over each word in the context is then calculated as the dot product of its contextual embedding with the question embedding. This attention is then normalized by the softmax function and summed across all occurrences of each answer candidate. The candidate with most accumulated attention is selected as the final answer.",
"For a more detailed description of the model including equations check BIBREF4 . More details about the training setup and model hyperparameters can be found in the Appendix.",
"During our past experiments on the CNN, DM and CBT datasets BIBREF4 each unique word from the training, validation and test datasets had its row in the look-up matrix INLINEFORM0 . However as we radically increased the dataset size, this would result in an extremely large number of model parameters so we decided to limit the vocabulary size to INLINEFORM1 most frequent words. For each example, each unique out-of-vocabulary word is now mapped on one of 1000 anonymous tokens which are randomly initialized and untrained. Fixing the embeddings of these anonymous tags proved to significantly improve the performance.",
"While mostly using the original AS Reader model, we have also tried introducing a minor tweak in some instances of the model. We tried initializing the context encoder GRU's hidden state by letting the encoder read the question first before proceeding to read the context document. Intuitively this allows the encoder to know in advance what to look for when reading over the context document.",
"Including models of this kind in the ensemble helped to improve the performance."
],
[
"Table TABREF25 shows the accuracy of the psr and other architectures on the CBT validation and test data. The last two rows show the performance of the psr trained on the BookTest dataset; all the other models have been trained on the original CBT training data.",
"If we take the best psr ensemble trained on CBT as a baseline, improving the model architecture as in BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , continuing to use the original CBT training data, lead to improvements of INLINEFORM0 and INLINEFORM1 absolute on named entities and common nouns respectively. By contrast, inflating the training dataset provided a boost of INLINEFORM2 while using the same model. The ensemble of our models even exceeded the human baseline provided by Facebook BIBREF2 on the Common Noun dataset.",
"Our model takes approximately two weeks to converge when trained on the BookTest dataset on a single Nvidia Tesla K40 GPU."
],
[
"Embracing the abundance of data may mean focusing on other aspects of system design than with smaller data. Here are some of the challenges that we need to face in this situation.",
"Firstly, since the amount of data is practically unlimited – we could even generate them on the fly resulting in continuous learning similar to the Never-Ending Language Learning by Carnegie Mellon University BIBREF30 – it is now the speed of training that determines how much data the model is able to see. Since more training data significantly help the model performance, focusing on speeding up the algorithm may be more important than ever before. This may for instance influence the decision whether to use regularization such as dropout which does seem to somewhat improve the model performance, however usually at a cost of slowing down training.",
"Thanks to its simplicity, the psr seems to be training fast - for example around seven times faster than the models proposed by Chen et al. BIBREF5 . Hence the psr may be particularly suitable for training on large datasets.",
"The second challenge is how to generalize the performance gains from large data to a specific target domain. While there are huge amounts of natural language data in general, it may not be the case in the domain where we may want to ultimately apply our model.",
"Hence we are usually not facing a scenario of simply using a larger amount of the same training data, but rather extending training to a related domain of data, hoping that some of what the model learns on the new data will still help it on the original task.",
"This is highlighted by our observations from applying a model trained on the BookTest to Children's Book Test test data. If we move model training from joint CBT NE+CN training data to a subset of the BookTest of the same size (230k examples), we see a drop in accuracy of around 10% on the CBT test datasets.",
"Hence even though the Children's Book Test and BookTest datasets are almost as close as two disjoint datasets can get, the transfer is still very imperfect . Rightly choosing data to augment the in-domain training data is certainly a problem worth exploring in future work.",
"Our results show that given enough data the AS Reader was able to exceed the human performance on CBT CN reported by Facebook. However we hypothesized that the system is still not achieving its full potential so we decided to examine the room for improvement in our own small human study."
],
[
"After adding more data we have the performance on the CBT validation and test datasets soaring. However is there still potential for much further growth beyond the results we have observed?",
"We decided to explore the remaining space for improvement on the CBT by testing humans on a random subset of 50 named entity and 50 common noun validation questions that the psr ensemble could not answer correctly. These questions were answered by 10 non-native English speakers from our research laboratory, each on a disjoint subset of questions.. Participants had unlimited time to answer the questions and were told that these questions were not correctly answered by a machine, providing additional motivation to prove they are better than computers. The results of the human study are summarized in Table TABREF28 . They show that a majority of questions that our system could not answer so far are in fact answerable. This suggests that 1) the original human baselines might have been underestimated, however, it might also be the case that there are some examples that can be answered by machines and not by humans; 2) there is still space for improvement.",
"A system that would answer correctly every time when either our ensemble or human answered correctly would achieve accuracy over 92% percent on both validation and test NE datasets and over 96% on both CN datasets. Hence it still makes sense to use CBT dataset to study further improvements of text-comprehension systems."
],
[
"Few ways of improving model performance are as solidly established as using more training data. Yet we believe this principle has been somewhat neglected by recent research in text comprehension. While there is a practically unlimited amount of data available in this field, most research was performed on unnecessarily small datasets.",
"As a gentle reminder to the community we have shown that simply infusing a model with more data can yield performance improvements of up to INLINEFORM0 where several attempts to improve the model architecture on the same training data have given gains of at most INLINEFORM1 compared to our best ensemble result. Yes, experiments on small datasets certainly can bring useful insights. However we believe that the community should also embrace the real-world scenario of data abundance.",
"The BookTest dataset we are proposing gives the reading-comprehension community an opportunity to make a step in that direction."
],
[
"The training details are similar to those in BIBREF4 however we are including them here for completeness.",
"To train the model we used stochastic gradient descent with the ADAM update rule BIBREF32 and learning rates of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 . The best learning rate in our experiments was INLINEFORM3 . We minimized negative log-likelihood as the training objective.",
"The initial weights in the word-embedding matrix were drawn randomly uniformly from the interval INLINEFORM0 . Weights in the GRU networks were initialized by random orthogonal matrices BIBREF34 and biases were initialized to zero. We also used a gradient clipping BIBREF33 threshold of 10 and batches of sizes between 32 or 256. Increasing the batch from 32 to 128 seems to significantly improve performance on the large dataset - something we did not observe on the original CBT data. Increasing the batch size much above 128 is currently difficult due to memory constraints of the GPU.",
"During training we randomly shuffled all examples at the beginning of each epoch. To speed up training, we always pre-fetched 10 batches worth of examples and sorted them according to document length. Hence each batch contained documents of roughly the same length.",
"We also did not use pre-trained word embeddings.",
"We did not perform any text pre-processing since the datasets were already tokenized.",
"During training we evaluated the model performance every 12 hours and at the end of each epoch and stopped training when the error on the 20k BookTest validation set started increasing. We explored the hyperparameter space by training 67 different models The region of the parameter space that we explored together with the parameters of the model with best validation accuracy are summarized in Table TABREF29 .",
"Our model was implemented using Theano BIBREF31 and Blocks BIBREF35 .",
"The ensembles were formed by simply averaging the predictions from the constituent single models. These single models were selected using the following algorithm.",
"We started with the best performing model according to validation performance. Then in each step we tried adding the best performing model that had not been previously tried. We kept it in the ensemble if it did improve its validation performance and discarded it otherwise. This way we gradually tried each model once. We call the resulting model a greedy ensemble. We used the INLINEFORM0 BookTest validation dataset for this procedure.",
"The algorithm was offered 10 models and selected 5 of them for the final ensemble."
]
],
"section_name": [
"Introduction",
"Task Description",
"Current Landscape",
"Datasets",
"Machine Learning Models",
"Possible Directions for Improvements",
"BookTest",
"Baselines",
"AS Reader",
"Results",
"Discussion",
"Human Study",
"Conclusion",
"Training Details"
]
} | {
"answers": [
{
"annotation_id": [
"02dd9a461b813df2cafdcdc8dacdae784d8da24d"
],
"answer": [
{
"evidence": [
"The ensembles were formed by simply averaging the predictions from the constituent single models. These single models were selected using the following algorithm."
],
"extractive_spans": [
"simply averaging the predictions from the constituent single models"
],
"free_form_answer": "",
"highlighted_evidence": [
"The ensembles were formed by simply averaging the predictions from the constituent single models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"aea9125e6378a6c1f7ba7ae55741a33e5c3d9f61",
"c19e60b43b1afc03d521d995adcbf3a44f4fc460"
],
"answer": [
{
"evidence": [
"If we take the best psr ensemble trained on CBT as a baseline, improving the model architecture as in BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , continuing to use the original CBT training data, lead to improvements of INLINEFORM0 and INLINEFORM1 absolute on named entities and common nouns respectively. By contrast, inflating the training dataset provided a boost of INLINEFORM2 while using the same model. The ensemble of our models even exceeded the human baseline provided by Facebook BIBREF2 on the Common Noun dataset."
],
"extractive_spans": [
"INLINEFORM2 "
],
"free_form_answer": "",
"highlighted_evidence": [
"If we take the best psr ensemble trained on CBT as a baseline, improving the model architecture as in BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , continuing to use the original CBT training data, lead to improvements of INLINEFORM0 and INLINEFORM1 absolute on named entities and common nouns respectively. By contrast, inflating the training dataset provided a boost of INLINEFORM2 while using the same model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF25 shows the accuracy of the psr and other architectures on the CBT validation and test data. The last two rows show the performance of the psr trained on the BookTest dataset; all the other models have been trained on the original CBT training data."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Table 2) Accuracy of best AS reader results including ensembles are 78.4 and 83.7 when trained on BookTest compared to 71.0 and 68.9 when trained on CBT for Named endity and Common noun respectively.",
"highlighted_evidence": [
"Table TABREF25 shows the accuracy of the psr and other architectures on the CBT validation and test data. The last two rows show the performance of the psr trained on the BookTest dataset; all the other models have been trained on the original CBT training data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"53f981ffefe55f490e54cf09ad378c1adf561aa4",
"8fc69eff5de9a6132faa7dfedb648e8a13e62ccf"
],
"answer": [
{
"evidence": [
"We decided to explore the remaining space for improvement on the CBT by testing humans on a random subset of 50 named entity and 50 common noun validation questions that the psr ensemble could not answer correctly. These questions were answered by 10 non-native English speakers from our research laboratory, each on a disjoint subset of questions.. Participants had unlimited time to answer the questions and were told that these questions were not correctly answered by a machine, providing additional motivation to prove they are better than computers. The results of the human study are summarized in Table TABREF28 . They show that a majority of questions that our system could not answer so far are in fact answerable. This suggests that 1) the original human baselines might have been underestimated, however, it might also be the case that there are some examples that can be answered by machines and not by humans; 2) there is still space for improvement."
],
"extractive_spans": [
" by testing humans on a random subset of 50 named entity and 50 common noun validation questions that the psr ensemble could not answer correctly"
],
"free_form_answer": "",
"highlighted_evidence": [
"We decided to explore the remaining space for improvement on the CBT by testing humans on a random subset of 50 named entity and 50 common noun validation questions that the psr ensemble could not answer correctly."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We decided to explore the remaining space for improvement on the CBT by testing humans on a random subset of 50 named entity and 50 common noun validation questions that the psr ensemble could not answer correctly. These questions were answered by 10 non-native English speakers from our research laboratory, each on a disjoint subset of questions.. Participants had unlimited time to answer the questions and were told that these questions were not correctly answered by a machine, providing additional motivation to prove they are better than computers. The results of the human study are summarized in Table TABREF28 . They show that a majority of questions that our system could not answer so far are in fact answerable. This suggests that 1) the original human baselines might have been underestimated, however, it might also be the case that there are some examples that can be answered by machines and not by humans; 2) there is still space for improvement."
],
"extractive_spans": [
"majority of questions that our system could not answer so far are in fact answerable"
],
"free_form_answer": "",
"highlighted_evidence": [
"They show that a majority of questions that our system could not answer so far are in fact answerable. This suggests that 1) the original human baselines might have been underestimated, however, it might also be the case that there are some examples that can be answered by machines and not by humans; 2) there is still space for improvement."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How does their ensemble method work?",
"How large are the improvements of the Attention-Sum Reader model when using the BookTest dataset?",
"How do they show there is space for further improvement?"
],
"question_id": [
"09c86ef78e567033b725fc56b85c5d2602c1a7c3",
"d67c01d9b689c052045f3de1b0918bab18c3f174",
"e5bc73974c79d96eee2b688e578a9de1d0eb38fd"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Statistics on the 4 standard text comprehension datasets and our new BookTest dataset introduced in this paper. CBT CN stands for CBT Common Nouns and CBT NE stands for CBT Named Entites. Statistics were taken from (Hermann et al., 2015) and the statistics provided with the CBT data set.",
"Figure 1: Structure of the AS Reader model.",
"Table 2: Results of various architectures on the CBT test datasets.",
"Table 3: Accuracy of human participants on examples from validation set that were previously incorrectly answered by our system trained only on the BookTest data.",
"Table 4: The best hyperparameters and range of values that we tested on the BookTest train dataset. We report number of hidden units of the unidirectional GRU; the bidirectional GRU has twice as many hidden units."
],
"file": [
"3-Table1-1.png",
"6-Figure1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"10-Table4-1.png"
]
} | [
"How large are the improvements of the Attention-Sum Reader model when using the BookTest dataset?"
] | [
[
"1610.00956-Results-0",
"1610.00956-Results-1"
]
] | [
"Answer with content missing: (Table 2) Accuracy of best AS reader results including ensembles are 78.4 and 83.7 when trained on BookTest compared to 71.0 and 68.9 when trained on CBT for Named endity and Common noun respectively."
] | 172 |
1601.02403 | Argumentation Mining in User-Generated Web Discourse | The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people's argumentation. In this article, we go beyond the state of the art in several ways. (i) We deal with actual Web data and take up the challenges given by the variety of registers, multiple domains, and unrestricted noisy user-generated Web discourse. (ii) We bridge the gap between normative argumentation theories and argumentation phenomena encountered in actual data by adapting an argumentation model tested in an extensive annotation study. (iii) We create a new gold standard corpus (90k tokens in 340 documents) and experiment with several machine learning methods to identify argument components. We offer the data, source codes, and annotation guidelines to the community under free licenses. Our findings show that argumentation mining in user-generated Web discourse is a feasible but challenging task. | {
"paragraphs": [
[
"The art of argumentation has been studied since the early work of Aristotle, dating back to the 4th century BC BIBREF0 . It has been exhaustively examined from different perspectives, such as philosophy, psychology, communication studies, cognitive science, formal and informal logic, linguistics, computer science, educational research, and many others. In a recent and critically well-acclaimed study, Mercier.Sperber.2011 even claim that argumentation is what drives humans to perform reasoning. From the pragmatic perspective, argumentation can be seen as a verbal activity oriented towards the realization of a goal BIBREF1 or more in detail as a verbal, social, and rational activity aimed at convincing a reasonable critic of the acceptability of a standpoint by putting forward a constellation of one or more propositions to justify this standpoint BIBREF2 .",
"Analyzing argumentation from the computational linguistics point of view has very recently led to a new field called argumentation mining BIBREF3 . Despite the lack of an exact definition, researchers within this field usually focus on analyzing discourse on the pragmatics level and applying a certain argumentation theory to model and analyze textual data at hand.",
"Our motivation for argumentation mining stems from a practical information seeking perspective from the user-generated content on the Web. For example, when users search for information in user-generated Web content to facilitate their personal decision making related to controversial topics, they lack tools to overcome the current information overload. One particular use-case example dealing with a forum post discussing private versus public schools is shown in Figure FIGREF4 . Here, the lengthy text on the left-hand side is transformed into an argument gist on the right-hand side by (i) analyzing argument components and (ii) summarizing their content. Figure FIGREF5 shows another use-case example, in which users search for reasons that underpin certain standpoint in a given controversy (which is homeschooling in this case). In general, the output of automatic argument analysis performed on the large scale in Web data can provide users with analyzed arguments to a given topic of interest, find the evidence for the given controversial standpoint, or help to reveal flaws in argumentation of others.",
"Satisfying the above-mentioned information needs cannot be directly tackled by current methods for, e.g., opinion mining, questions answering, or summarization and requires novel approaches within the argumentation mining field. Although user-generated Web content has already been considered in argumentation mining, many limitations and research gaps can be identified in the existing works. First, the scope of the current approaches is restricted to a particular domain or register, e.g., hotel reviews BIBREF5 , Tweets related to local riot events BIBREF6 , student essays BIBREF7 , airline passenger rights and consumer protection BIBREF8 , or renewable energy sources BIBREF9 . Second, not all the related works are tightly connected to argumentation theories, resulting into a gap between the substantial research in argumentation itself and its adaptation in NLP applications. Third, as an emerging research area, argumentation mining still suffers from a lack of labeled corpora, which is crucial for designing, training, and evaluating the algorithms. Although some works have dealt with creating new data sets, the reliability (in terms of inter-annotator agreement) of the annotated resources is often unknown BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 .",
"Annotating and automatically analyzing arguments in unconstrained user-generated Web discourse represent challenging tasks. So far, the research in argumentation mining “has been conducted on domains like news articles, parliamentary records and legal documents, where the documents contain well-formed explicit arguments, i.e., propositions with supporting reasons and evidence present in the text” BIBREF8 . [p. 50]Boltuzic.Snajder.2014 point out that “unlike in debates or other more formal argumentation sources, the arguments provided by the users, if any, are less formal, ambiguous, vague, implicit, or often simply poorly worded.” Another challenge stems from the different nature of argumentation theories and computational linguistics. Whereas computational linguistics is mainly descriptive, the empirical research that is carried out in argumentation theories does not constitute a test of the theoretical model that is favored, because the model of argumentation is a normative instrument for assessing the argumentation BIBREF15 . So far, no fully fledged descriptive argumentation theory based on empirical research has been developed, thus feasibility of adapting argumentation models to the Web discourse represents an open issue.",
"These challenges can be formulated into the following research questions:",
"In this article, we push the boundaries of the argumentation mining field by focusing on several novel aspects. We tackle the above-mentioned research questions as well as the previously discussed challenges and issues. First, we target user-generated Web discourse from several domains across various registers, to examine how argumentation is communicated in different contexts. Second, we bridge the gap between argumentation theories and argumentation mining through selecting the argumenation model based on research into argumentation theories and related fields in communication studies or psychology. In particular, we adapt normative models from argumentation theory to perform empirical research in NLP and support our application of argumentation theories with an in-depth reliability study. Finally, we use state-of-the-art NLP techniques in order to build robust computational models for analyzing arguments that are capable of dealing with a variety of genres on the Web."
],
[
"We create a new corpus which is, to the best of our knowledge, the largest corpus that has been annotated within the argumentation mining field to date. We choose several target domains from educational controversies, such as homeschooling, single-sex education, or mainstreaming. A novel aspect of the corpus is its coverage of different registers of user-generated Web content, such as comments to articles, discussion forum posts, blog posts, as well as professional newswire articles.",
"Since the data come from a variety of sources and no assumptions about its actual content with respect to argumentation can be drawn, we conduct two extensive annotation studies. In the first study, we tackle the problem of relatively high “noise” in the retrieved data. In particular, not all of the documents are related to the given topics in a way that makes them candidates for further deep analysis of argumentation (this study results into 990 annotated documents). In the second study, we discuss the selection of an appropriate argumentation model based on evidence in argumentation research and propose a model that is suitable for analyzing micro-level argumention in user-generated Web content. Using this model, we annotate 340 documents (approx. 90,000 tokens), reaching a substantial inter-annotator agreement. We provide a hand-analysis of all the phenomena typical to argumentation that are prevalent in our data. These findings may also serve as empirical evidence to issues that are on the spot of current argumentation research.",
"From the computational perspective, we experiment on the annotated data using various machine learning methods in order to extract argument structure from documents. We propose several novel feature sets and identify configurations that run best in in-domain and cross-domain scenarios. To foster research in the community, we provide the annotated data as well as all the experimental software under free license.",
"The rest of the article is structured as follows. First, we provide an essential background in argumentation theory in section SECREF2 . Section SECREF3 surveys related work in several areas. Then we introduce the dataset and two annotation studies in section SECREF4 . Section SECREF5 presents our experimental work and discusses the results and errors and section SECREF6 concludes this article."
],
[
"Let us first present some definitions of the term argumentation itself. [p. 3]Ketcham.1917 defines argumentation as “the art of persuading others to think or act in a definite way. It includes all writing and speaking which is persuasive in form.” According to MacEwan.1898, “argumentation is the process of proving or disproving a proposition. Its purpose is to induce a new belief, to establish truth or combat error in the mind of another.” [p. 2]Freeley.Steinberg.2008 narrow the scope of argumentation to “reason giving in communicative situations by people whose purpose is the justification of acts, beliefs, attitudes, and values.” Although these definitions vary, the purpose of argumentation remains the same – to persuade others.",
"We would like to stress that our perception of argumentation goes beyond somehow limited giving reasons BIBREF17 , BIBREF18 . Rather, we see the goal of argumentation as to persuade BIBREF19 , BIBREF20 , BIBREF21 . Persuasion can be defined as a successful intentional effort at influencing another's mental state through communication in a circumstance in which the persuadee has some measure of freedom BIBREF22 , although, as OKeefe2011 points out, there is no correct or universally-endorsed definition of either `persuasion' or `argumentation'. However, broader understanding of argumentation as a means of persuasion allows us to take into account not only reasoned discourse, but also non-reasoned mechanisms of influence, such as emotional appeals BIBREF23 .",
"Having an argument as a product within the argumentation process, we should now define it. One typical definition is that an argument is a claim supported by reasons BIBREF24 . The term claim has been used since 1950's, introduced by Toulmin.1958, and in argumentation theory it is a synonym for standpoint or point of view. It refers to what is an issue in the sense what is being argued about. The presence of a standpoint is thus crucial for argumentation analysis. However, the claim as well as other parts of the argument might be implicit; this is known as enthymematic argumentation, which is rather usual in ordinary argumentative discourse BIBREF25 .",
"One fundamental problem with the definition and formal description of arguments and argumentation is that there is no agreement even among argumentation theorists. As [p. 29]vanEmeren.et.al.2014 admit in their very recent and exhaustive survey of the field, ”as yet, there is no unitary theory of argumentation that encompasses the logical, dialectical, and rhetorical dimensions of argumentation and is universally accepted. The current state of the art in argumentation theory is characterized by the coexistence of a variety of theoretical perspectives and approaches, which differ considerably from each other in conceptualization, scope, and theoretical refinement.”"
],
[
"Despite the missing consensus on the ultimate argumentation theory, various argumentation models have been proposed that capture argumentation on different levels. Argumentation models abstract from the language level to a concept level that stresses the links between the different components of an argument or how arguments relate to each other BIBREF26 . Bentahar.et.al.2010 propose a taxonomy of argumentation models, that is horizontally divided into three categories – micro-level models, macro-level models, and rhetorical models.",
"In this article, we deal with argumentation on the micro-level (also called argumentation as a product or monological models). Micro-level argumentation focuses on the structure of a single argument. By contrast, macro-level models (also called dialogical models) and rhetorical models highlight the process of argumentation in a dialogue BIBREF27 . In other words, we examine the structure of a single argument produced by a single author in term of its components, not the relations that can exist among arguments and their authors in time. A detailed discussion of these different perspectives can be found, e.g., in BIBREF28 , BIBREF29 , BIBREF30 , BIBREF1 , BIBREF31 , BIBREF32 ."
],
[
"The above-mentioned models focus basically only on one dimension of the argument, namely the logos dimension. According to the classical Aristotle's theory BIBREF0 , argument can exist in three dimensions, which are logos, pathos, and ethos. Logos dimension represents a proof by reason, an attempt to persuade by establishing a logical argument. For example, syllogism belongs to this argumentation dimension BIBREF34 , BIBREF25 . Pathos dimension makes use of appealing to emotions of the receiver and impacts its cognition BIBREF35 . Ethos dimension of argument relies on the credibility of the arguer. This distinction will have practical impact later in section SECREF51 which deals with argumentation on the Web."
],
[
"We conclude the theoretical section by presenting one (micro-level) argumentation model in detail – a widely used conceptual model of argumentation introduced by Toulmin.1958, which we will henceforth denote as the Toulmin's original model. This model will play an important role later in the annotation studies (section SECREF51 ) and experimental work (section SECREF108 ). The model consists of six parts, referred as argument components, where each component plays a distinct role.",
"is an assertion put forward publicly for general acceptance BIBREF38 or the conclusion we seek to establish by our arguments BIBREF17 .",
"It is the evidence to establish the foundation of the claim BIBREF24 or, as simply put by Toulmin, “the data represent what we have to go on.” BIBREF37 . The name of this concept was later changed to grounds in BIBREF38 .",
"The role of warrant is to justify a logical inference from the grounds to the claim.",
"is a set of information that stands behind the warrant, it assures its trustworthiness.",
"limits the degree of certainty under which the argument should be accepted. It is the degree of force which the grounds confer on the claim in virtue of the warrant BIBREF37 .",
"presents a situation in which the claim might be defeated.",
"A schema of the Toulmin's original model is shown in Figure FIGREF29 . The lines and arrows symbolize implicit relations between the components. An example of an argument rendered using the Toulmin's scheme can be seen in Figure FIGREF30 .",
"We believe that this theoretical overview should provide sufficient background for the argumentation mining research covered in this article; for further references, we recommend for example BIBREF15 ."
],
[
"We structure the related work into three sub-categories, namely argumentation mining, stance detection, and persuasion and on-line dialogs, as these areas are closest to this article's focus. For a recent overview of general discourse analysis see BIBREF39 . Apart from these, research on computer-supported argumentation has been also very active; see, e.g., BIBREF40 for a survey of various models and argumentation formalisms from the educational perspective or BIBREF41 which examines argumentation in the Semantic Web."
],
[
"The argumentation mining field has been evolving very rapidly in the recent years, resulting into several workshops co-located with major NLP conferences. We first present related works with a focus on annotations and then review experiments with classifying argument components, schemes, or relations.",
"One of the first papers dealing with annotating argumentative discourse was Argumentative Zoning for scientific publications BIBREF42 . Later, Teufel.et.al.2009 extended the original 7 categories to 15 and annotated 39 articles from two domains, where each sentence is assigned a category. The obtained Fleiss' INLINEFORM0 was 0.71 and 0.65. In their approach, they tried to deliberately ignore the domain knowledge and rely only on general, rhetorical and logical aspect of the annotated texts. By contrast to our work, argumentative zoning is specific to scientific publications and has been developed solely for that task.",
"Reed.Rowe.2004 presented Araucaria, a tool for argumentation diagramming which supports both convergent and linked arguments, missing premises (enthymemes), and refutations. They also released the AracuariaDB corpus which has later been used for experiments in the argumentation mining field. However, the creation of the dataset in terms of annotation guidelines and reliability is not reported – these limitations as well as its rather small size have been identified BIBREF10 .",
"Biran.Rambow.2011 identified justifications for subjective claims in blog threads and Wikipedia talk pages. The data were annotated with claims and their justifications reaching INLINEFORM0 0.69, but a detailed description of the annotation approach was missing.",
"[p. 1078]Schneider.et.al.2013b annotated Wikipedia talk pages about deletion using 17 Walton's schemes BIBREF43 , reaching a moderate agreement (Cohen's INLINEFORM0 0.48) and concluded that their analysis technique can be reused, although “it is intensive and difficult to apply.”",
"Stab.Gurevych.2014 annotated 90 argumentative essays (about 30k tokens), annotating claims, major claims, and premises and their relations (support, attack). They reached Krippendorff's INLINEFORM0 0.72 for argument components and Krippendorff's INLINEFORM1 0.81 for relations between components.",
"Rosenthal2012 annotated sentences that are opinionated claims, in which the author expresses a belief that should be adopted by others. Two annotators labeled sentences as claims without any context and achieved Cohen's INLINEFORM0 0.50 (2,000 sentences from LiveJournal) and 0.56 (2,000 sentences from Wikipedia).",
"Aharoni.et.al.2014 performed an annotation study in order to find context-dependent claims and three types of context-dependent evidence in Wikipedia, that were related to 33 controversial topics. The claim and evidence were annotated in 104 articles. The average Cohen's INLINEFORM0 between a group of 20 expert annotators was 0.40. Compared to our work, the linguistic properties of Wikipedia are qualitatively different from other user-generated content, such as blogs or user comments BIBREF44 .",
"Wacholder.et.al.2014 annotated “argument discourse units” in blog posts and criticized the Krippendorff's INLINEFORM0 measure. They proposed a new inter-annotator metric by taking the most overlapping part of one annotation as the “core” and all annotations as a “cluster”. The data were extended by Ghosh2014, who annotated “targets” and “callouts” on the top of the units.",
"Park.Cardie.2014 annotated about 10k sentences from 1,047 documents into four types of argument propositions with Cohen's INLINEFORM0 0.73 on 30% of the dataset. Only 7% of the sentences were found to be non-argumentative.",
"Faulkner2014 used Amazon Mechanical Turk to annotate 8,179 sentences from student essays. Three annotators decided whether the given sentence offered reasons for or against the main prompt of the essay (or no reason at all; 66% of the sentences were found to be neutral and easy to identify). The achieved Cohen's INLINEFORM0 was 0.70.",
"The research has also been active on non-English datasets. Goudas.et.al.2014 focused on user-generated Greek texts. They selected 204 documents and manually annotated sentences that contained an argument (760 out of 16,000). They distinguished claims and premises, but the claims were always implicit. However, the annotation agreement was not reported, neither was the number of annotators or the guidelines. A study on annotation of arguments was conducted by Peldszus.Stede.2013, who evaluate agreement among 26 “naive\" annotators (annotators with very little training). They manually constructed 23 German short texts, each of them contains exactly one central claim, two premises, and one objection (rebuttal or undercut) and analyzed annotator agreement on this artificial data set. Peldszus.2014 later achieved higher inter-rater agreement with expert annotators on an extended version of the same data. Kluge.2014 built a corpus of argumentative German Web documents, containing 79 documents from 7 educational topics, which were annotated by 3 annotators according to the claim-premise argumentation model. The corpus comprises 70,000 tokens and the inter-annotator agreement was 0.40 (Krippendorff's INLINEFORM0 ). Houy.et.al.2013 targeted argumentation mining of German legal cases.",
"Table TABREF33 gives an overview of annotation studies with their respective argumentation model, domain, size, and agreement. It also contains other studies outside of computational linguistics and few proposals and position papers.",
"Arguments in the legal domain were targeted in BIBREF11 . Using argumentation formalism inspired by Walton.2012, they employed multinomial Naive Bayes classifier and maximum entropy model for classifying argumentative sentences on the AraucariaDB corpus BIBREF45 . The same test dataset was used by Feng.Hirst.2011, who utilized the C4.5 decision classifier. Rooney.et.al.2012 investigated the use of convolution kernel methods for classifying whether a sentence belongs to an argumentative element or not using the same corpus.",
"Stab.Gurevych.2014b classified sentences to four categories (none, major claim, claim, premise) using their previously annotated corpus BIBREF7 and reached 0.72 macro- INLINEFORM0 score. In contrast to our work, their documents are expected to comply with a certain structure of argumentative essays and are assumed to always contain argumentation.",
"Biran.Rambow.2011 identified justifications on the sentence level using a naive Bayes classifier over a feature set based on statistics from the RST Treebank, namely n-grams which were manually processed by deleting n-grams that “seemed irrelevant, ambiguous or domain-specific.”",
"Llewellyn2014 experimented with classifying tweets into several argumentative categories, namely claims and counter-claims (with and without evidence) and verification inquiries previously annotated by Procter.et.al.2013. They used unigrams, punctuations, and POS as features in three classifiers.",
"Park.Cardie.2014 classified propositions into three classes (unverifiable, verifiable non-experimental, and verifiable experimental) and ignored non-argumentative texts. Using multi-class SVM and a wide range of features (n-grams, POS, sentiment clue words, tense, person) they achieved Macro INLINEFORM0 0.69.",
"Peldszus.2014 experimented with a rather complex labeling schema of argument segments, but their data were artificially created for their task and manually cleaned, such as removing segments that did not meet the criteria or non-argumentative segments.",
"In the first step of their two-phase approach, Goudas.et.al.2014 sampled the dataset to be balanced and identified argumentative sentences with INLINEFORM0 0.77 using the maximum entropy classifier. For identifying premises, they used BIO encoding of tokens and achieved INLINEFORM1 score 0.42 using CRFs.",
"Saint-Dizier.2012 developed a Prolog engine using a lexicon of 1300 words and a set of 78 hand-crafted rules with the focus on a particular argument structure “reasons supporting conclusions” in French.",
"Taking the dialogical perspective, Cabrio.Villata.2012 built upon an argumentation framework proposed by Dung.1995 which models arguments within a graph structure and provides a reasoning mechanism for resolving accepted arguments. For identifying support and attack, they relied on existing research on textual entailment BIBREF46 , namely using the off-the-shelf EDITS system. The test data were taken from a debate portal Debatepedia and covered 19 topics. Evaluation was performed in terms of measuring the acceptance of the “main argument\" using the automatically recognized entailments, yielding INLINEFORM0 score about 0.75. By contrast to our work which deals with micro-level argumentation, the Dung's model is an abstract framework intended to model dialogical argumentation.",
"Finding a bridge between existing discourse research and argumentation has been targeted by several researchers. Peldszus2013a surveyed literature on argumentation and proposed utilization of Rhetorical Structure Theory (RST) BIBREF47 . They claimed that RST is by its design well-suited for studying argumentative texts, but an empirical evidence has not yet been provided. Penn Discourse Tree Bank (PDTB) BIBREF48 relations have been under examination by argumentation mining researchers too. Cabrio2013b examined a connection between five Walton's schemes and discourse markers in PDTB, however an empirical evaluation is missing."
],
[
"Research related to argumentation mining also involves stance detection. In this case, the whole document (discussion post, article) is assumed to represent the writer's standpoint to the discussed topic. Since the topic is stated as a controversial question, the author is either for or against it.",
"Somasundaran.Wiebe.2009 built a computational model for recognizing stances in dual-topic debates about named entities in the electronic products domain by combining preferences learned from the Web data and discourse markers from PDTB BIBREF48 . Hasan.Ng.2013 determined stance in on-line ideological debates on four topics using data from createdebate.com, employing supervised machine learning and features ranging from n-grams to semantic frames. Predicting stance of posts in Debatepedia as well as external articles using a probabilistic graphical model was presented in BIBREF49 . This approach also employed sentiment lexicons and Named Entity Recognition as a preprocessing step and achieved accuracy about 0.80 in binary prediction of stances in debate posts.",
"Recent research has involved joint modeling, taking into account information about the users, the dialog sequences, and others. Hasan.Ng.2012 proposed machine learning approach to debate stance classification by leveraging contextual information and author's stances towards the topic. Qiu.et.al.2013 introduced a computational debate side model to cluster posts or users by sides for general threaded discussions using a generative graphical model employing words from various subjectivity lexicons as well as all adjectives and adverbs in the posts. Qiu.Jiang.2013 proposed a graphical model for viewpoint discovery in discussion threads. Burfoot.et.al.2011 exploited the informal citation structure in U.S. Congressional floor-debate transcripts and use a collective classification which outperforms methods that consider documents in isolation.",
"Some works also utilize argumentation-motivated features. Park.et.al.2011 dealt with contentious issues in Korean newswire discourse. Although they annotate the documents with “argument frames”, the formalism remains unexplained and does not refer to any existing research in argumentation. Walker.et.al.2012b incorporated features with some limited aspects of the argument structure, such as cue words signaling rhetorical relations between posts, POS generalized dependencies, and a representation of the parent post (context) to improve stance classification over 14 topics from convinceme.net."
],
[
"Another stream of research has been devoted to persuasion in online media, which we consider as a more general research topic than argumentation.",
"Schlosser.2011 investigated persuasiveness of online reviews and concluded that presenting two sides is not always more helpful and can even be less persuasive than presenting one side. Mohammadi.et.al.2013 explored persuasiveness of speakers in YouTube videos and concluded that people are perceived more persuasive in video than in audio and text. Miceli.et.al.2006 proposed a computational model that attempts to integrate emotional and non-emotional persuasion. In the study of Murphy.2001, persuasiveness was assigned to 21 articles (out of 100 manually preselected) and four of them are later analyzed in detail for comparing the perception of persuasion between expert and students. Bernard.et.al.2012 experimented with children's perception of discourse connectives (namely with “because”) to link statements in arguments and found out that 4- and 5-years-old and adults are sensitive to the connectives. Le.2004 presented a study of persuasive texts and argumentation in newspaper editorials in French.",
"A coarse-grained view on dialogs in social media was examined by Bracewell.et.al.2013, who proposed a set of 15 social acts (such as agreement, disagreement, or supportive behavior) to infer the social goals of dialog participants and presented a semi-supervised model for their classification. Their social act types were inspired by research in psychology and organizational behavior and were motivated by work in dialog understanding. They annotated a corpus in three languages using in-house annotators and achieved INLINEFORM0 in the range from 0.13 to 0.53.",
"Georgila.et.al.2011 focused on cross-cultural aspects of persuasion or argumentation dialogs. They developed a novel annotation scheme stemming from different literature sources on negotiation and argumentation as well as from their original analysis of the phenomena. The annotation scheme is claimed to cover three dimensions of an utterance, namely speech act, topic, and response or reference to a previous utterance. They annotated 21 dialogs and reached Krippendorff's INLINEFORM0 between 0.38 and 0.57.",
"Given the broad landscape of various approaches to argument analysis and persuasion studies presented in this section, we would like to stress some novel aspects of the current article. First, we aim at adapting a model of argument based on research by argumentation scholars, both theoretical and empirical. We pose several pragmatical constraints, such as register independence (generalization over several registers). Second, our emphasis is put on reliable annotations and sufficient data size (about 90k tokens). Third, we deal with fairly unrestricted Web-based sources, so additional steps of distinguishing whether the texts are argumentative are required. Argumentation mining has been a rapidly evolving field with several major venues in 2015. We encourage readers to consult an upcoming survey article by Lippi.Torroni.2016 or the proceedings of the 2nd Argumentation Mining workshop BIBREF50 to keep up with recent developments. However, to the best of our knowledge, the main findings of this article have not yet been made obsolete by any related work."
],
[
"This section describes the process of data selection, annotation, curation, and evaluation with the goal of creating a new corpus suitable for argumentation mining research in the area of computational linguistics. As argumentation mining is an evolving discipline without established and widely-accepted annotation schemes, procedures, and evaluation, we want to keep this overview detailed to ensure full reproducibility of our approach. Given the wide range of perspectives on argumentation itself BIBREF15 , variety of argumentation models BIBREF27 , and high costs of discourse or pragmatic annotations BIBREF48 , creating a new, reliable corpus for argumentation mining represents a substantial effort.",
"A motivation for creating a new corpus stems from the various use-cases discussed in the introduction, as well as some research gaps pointed in section SECREF1 and further discussed in the survey in section SECREF31 (e.g., domain restrictions, missing connection to argumentation theories, non-reported reliability or detailed schemes)."
],
[
"As a main field of interest in the current study, we chose controversies in education. One distinguishing feature of educational topics is their breadth of sub-topics and points of view, as they attract researchers, practitioners, parents, students, or policy-makers. We assume that this diversity leads to the linguistic variability of the education topics and thus represents a challenge for NLP. In a cooperation with researchers from the German Institute for International Educational Research we identified the following current controversial topics in education in English-speaking countries: (1) homeschooling, (2) public versus private schools, (3) redshirting — intentionally delaying the entry of an age-eligible child into kindergarten, allowing their child more time to mature emotionally and physically BIBREF51 , (4) prayer in schools — whether prayer in schools should be allowed and taken as a part of education or banned completely, (5) single-sex education — single-sex classes (males and females separate) versus mixed-sex classes (“co-ed”), and (6) mainstreaming — including children with special needs into regular classes.",
"Since we were also interested in whether argumentation differs across registers, we included four different registers — namely (1) user comments to newswire articles or to blog posts, (2) posts in discussion forums (forum posts), (3) blog posts, and (4) newswire articles. Throughout this work, we will refer to each article, blog post, comment, or forum posts as a document. This variety of sources covers mainly user-generated content except newswire articles which are written by professionals and undergo an editing procedure by the publisher. Since many publishers also host blog-like sections on their portals, we consider as blog posts all content that is hosted on personal blogs or clearly belong to a blog category within a newswire portal."
],
[
"Given the six controversial topics and four different registers, we compiled a collection of plain-text documents, which we call the raw corpus. It contains 694,110 tokens in 5,444 documents. As a coarse-grained analysis of the data, we examined the lengths and the number of paragraphs (see Figure FIGREF43 ). Comments and forum posts follow a similar distribution, being shorter than 300 tokens on average. By contrast, articles and blogs are longer than 400 tokens and have 9.2 paragraphs on average. The process of compiling the raw corpus and its further statistics are described in detail in Appendix UID158 ."
],
[
"The goal of this study was to select documents suitable for a fine-grained analysis of arguments. In a preliminary study on annotating argumentation using a small sample (50 random documents) of forum posts and comments from the raw corpus, we found that many documents convey no argumentation at all, even in discussions about controversies. We observed that such contributions do not intend to persuade; these documents typically contain story-sharing, personal worries, user interaction (asking questions, expressing agreement), off-topic comments, and others. Such characteristics are typical to on-line discussions in general, but they have not been examined with respect to argumentation or persuasion. Indeed, we observed that there are (1) documents that are completely unrelated and (2) documents that are related to the topic, but do not contain any argumentation. This issue has been identified among argumentation theorist; for example as external relevance by Paglieri.Castelfranchia.2014. Similar findings were also confirmed in related literature in argumentation mining, however never tackled empirically BIBREF53 , BIBREF8 These documents are thus not suitable for analyzing argumentation.",
"In order to filter documents that are suitable for argumentation annotation, we defined a binary document-level classification task. The distinction is made between either persuasive documents or non-persuasive (which includes all other sorts of texts, such as off-topic, story sharing, unrelated dialog acts, etc.).",
"The two annotated categories were on-topic persuasive and non-persuasive. Three annotators with near-native English proficiency annotated a set of 990 documents (a random subset of comments and forum posts) reaching 0.59 Fleiss' INLINEFORM0 . The final label was selected by majority voting. The annotation study took on average of 15 hours per annotator with approximately 55 annotated documents per hour. The resulting labels were derived by majority voting. Out of 990 documents, 524 (53%) were labeled as on-topic persuasive. We will refer to this corpus as gold data persuasive.",
"We examined all disagreements between annotators and discovered some typical problems, such as implicitness or topic relevance. First, the authors often express their stance towards the topic implicitly, so it must be inferred by the reader. To do so, certain common-ground knowledge is required. However, such knowledge heavily depends on many aspects, such as the reader's familiarity with the topic or her cultural background, as well as the context of the source website or the discussion forum thread. This also applies for sarcasm and irony. Second, the decision whether a particular topic is persuasive was always made with respect to the controversial topic under examination. Some authors shift the focus to a particular aspect of the given controversy or a related issue, making the document less relevant.",
"We achieved moderate agreement between the annotators, although the definition of persuasiveness annotation might seem a bit fuzzy. We found different amounts of persuasion in the specific topics. For instance, prayer in schools or private vs. public schools attract persuasive discourse, while other discussed controversies often contain non-persuasive discussions, represented by redshirting and mainstreaming. Although these two topics are also highly controversial, the participants of on-line discussions seem to not attempt to persuade but they rather exchange information, support others in their decisions, etc. This was also confirmed by socio-psychological researchers. Ammari.et.al.2014 show that parents of children with special needs rely on discussion sites for accessing information and social support and that, in particular, posts containing humor, achievement, or treatment suggestions are perceived to be more socially appropriate than posts containing judgment, violence, or social comparisons. According to Nicholson.Leask.2012, in the online forum, parents of autistic children were seen to understand the issue because they had lived it. Assuming that participants in discussions related to young kids (e.g., redshirting, or mainstreaming) are usually females (mothers), the gender can also play a role. In a study of online persuasion, Guadagno.Cialdini.2002 conclude that women chose to bond rather than compete (women feel more comfortable cooperating, even in a competitive environment), whereas men are motivated to compete if necessary to achieve independence."
],
[
"The goal of this study was to annotate documents on a detailed level with respect to an argumentation model. First, we will present the annotation scheme. Second, we will describe the annotation process. Finally, we will evaluate the agreement and draw some conclusions.",
"Given the theoretical background briefly introduced in section SECREF2 , we motivate our selection of the argumentation model by the following requirements. First, the scope of this work is to capture argumentation within a single document, thus focusing on micro-level models. Second, there should exist empirical evidence that such a model has been used for analyzing argumentation in previous works, so it is likely to be suitable for our purposes of argumentative discourse analysis in user-generated content. Regarding the first requirement, two typical examples of micro-level models are the Toulmin's model BIBREF36 and Walton's schemes BIBREF55 . Let us now elaborate on the second requirement.",
"Walton's argumentation schemes are claimed to be general and domain independent. Nevertheless, evidence from the computational linguistics field shows that the schemes lack coverage for analyzing real argumentation in natural language texts. In examining real-world political argumentation from BIBREF56 , Walton.2012 found out that 37.1% of the arguments collected did not fit any of the fourteen schemes they chose so they created new schemes ad-hoc. Cabrio2013b selected five argumentation schemes from Walton and map these patterns to discourse relation categories in the Penn Discourse TreeBank (PDTB) BIBREF48 , but later they had to define two new argumentation schemes that they discovered in PDTB. Similarly, Song.et.al.2014 admitted that the schemes are ambiguous and hard to directly apply for annotation, therefore they modified the schemes and created new ones that matched the data.",
"Although Macagno.Konstantinidou.2012 show several examples of two argumentation schemes applied to few selected arguments in classroom experiments, empirical evidence presented by Anthony.Kim.2014 reveals many practical and theoretical difficulties of annotating dialogues with schemes in classroom deliberation, providing many details on the arbitrary selection of the sub-set of the schemes, the ambiguity of the scheme definitions, concluding that the presence of the authors during the experiment was essential for inferring and identifying the argument schemes BIBREF57 .",
"Although this model (refer to section SECREF21 ) was designed to be applicable to real-life argumentation, there are numerous studies criticizing both the clarity of the model definition and the differentiation between elements of the model. Ball1994 claims that the model can be used only for the most simple arguments and fails on the complex ones. Also Freeman1991 and other argumentation theorists criticize the usefulness of Toulmin's framework for the description of real-life argumentative texts. However, others have advocated the model and claimed that it can be applied to the people's ordinary argumenation BIBREF58 , BIBREF59 .",
"A number of studies (outside the field of computational linguistics) used Toulmin's model as their backbone argumentation framework. Chambliss1995 experimented with analyzing 20 written documents in a classroom setting in order to find the argument patterns and parts. Simosi2003 examined employees' argumentation to resolve conflicts. Voss2006 analyzed experts' protocols dealing with problem-solving.",
"The model has also been used in research on computer-supported collaborative learning. Erduran2004 adapt Toulmin's model for coding classroom argumentative discourse among teachers and students. Stegmann2011 builds on a simplified Toulmin's model for scripted construction of argument in computer-supported collaborative learning. Garcia-Mila2013 coded utterances into categories from Toulmin's model in persuasion and consensus-reaching among students. Weinberger.Fischer.2006 analyze asynchronous discussion boards in which learners engage in an argumentative discourse with the goal to acquire knowledge. For coding the argument dimension, they created a set of argumentative moves based on Toulmin's model. Given this empirical evidence, we decided to build upon the Toulmin's model.",
"In this annotation task, a sequence of tokens (e.g. a phrase, a sentence, or any arbitrary text span) is labeled with a corresponding argument component (such as the claim, the grounds, and others). There are no explicit relations between these annotation spans as the relations are implicitly encoded in the pragmatic function of the components in the Toulmin's model.",
"In order to prove the suitability of the Toulmin's model, we analyzed 40 random documents from the gold data persuasive dataset using the original Toulmin's model as presented in section SECREF21 . We took into account sever criteria for assessment, such as frequency of occurrence of the components or their importance for the task. We proposed some modifications of the model based on the following observations.",
"Authors do not state the degree of cogency (the probability of their claim, as proposed by Toulmin). Thus we omitted qualifier from the model due to its absence in the data.",
"The warrant as a logical explanation why one should accept the claim given the evidence is almost never stated. As pointed out by BIBREF37 , “data are appealed to explicitly, warrants implicitly.” This observation has also been made by Voss2006. Also, according to [p. 205]Eemeren.et.al.1987, the distinction of warrant is perfectly clear only in Toulmin’s examples, but the definitions fail in practice. We omitted warrant from the model.",
"Rebuttal is a statement that attacks the claim, thus playing a role of an opposing view. In reality, the authors often attack the presented rebuttals by another counter-rebuttal in order to keep the whole argument's position consistent. Thus we introduced a new component – refutation – which is used for attacking the rebuttal. Annotation of refutation was conditioned of explicit presence of rebuttal and enforced by the annotation guidelines. The chain rebuttal–refutation is also known as the procatalepsis figure in rhetoric, in which the speaker raises an objection to his own argument and then immediately answers it. By doing so, the speaker hopes to strengthen the argument by dealing with possible counter-arguments before the audience can raise them BIBREF43 .",
"The claim of the argument should always reflect the main standpoint with respect to the discussed controversy. We observed that this standpoint is not always explicitly expressed, but remains implicit and must be inferred by the reader. Therefore, we allow the claim to be implicit. In such a case, the annotators must explicitly write down the (inferred) stance of the author.",
"By definition, the Toulmin's model is intended to model single argument, with the claim in its center. However, we observed in our data, that some authors elaborate on both sides of the controversy equally and put forward an argument for each side (by argument here we mean the claim and its premises, backings, etc.). Therefore we allow multiple arguments to be annotated in one document. At the same time, we restrained the annotators from creating complex argument hierarchies.",
"Toulmin's grounds have an equivalent role to a premise in the classical view on an argument BIBREF15 , BIBREF60 in terms that they offer the reasons why one should accept the standpoint expressed by the claim. As this terminology has been used in several related works in the argumentation mining field BIBREF7 , BIBREF61 , BIBREF62 , BIBREF11 , we will keep this convention and denote the grounds as premises.",
"One of the main critiques of the original Toulmin's model was the vague distinction between grounds, warrant, and backing BIBREF63 , BIBREF64 , BIBREF65 . The role of backing is to give additional support to the warrant, but there is no warrant in our model anymore. However, what we observed during the analysis, was a presence of some additional evidence. Such evidence does not play the role of the grounds (premises) as it is not meant as a reason supporting the claim, but it also does not explain the reasoning, thus is not a warrant either. It usually supports the whole argument and is stated by the author as a certain fact. Therefore, we extended the scope of backing as an additional support to the whole argument.",
"The annotators were instructed to distinguish between premises and backing, so that premises should cover generally applicable reasons for the claim, whereas backing is a single personal experience or statements that give credibility or attribute certain expertise to the author. As a sanity check, the argument should still make sense after removing backing (would be only considered “weaker”).",
"We call the model as a modified Toulmin's model. It contains five argument components, namely claim, premise, backing, rebuttal, and refutation. When annotating a document, any arbitrary token span can be labeled with an argument component; the components do not overlap. The spans are not known in advance and the annotator thus chooses the span and the component type at the same time. All components are optional (they do not have to be present in the argument) except the claim, which is either explicit or implicit (see above). If a token span is not labeled by any argument component, it is not considered as a part of the argument and is later denoted as none (this category is not assigned by the annotators).",
"An example analysis of a forum post is shown in Figure FIGREF65 . Figure FIGREF66 then shows a diagram of the analysis from that example (the content of the argument components was shortened or rephrased).",
"The annotation experiment was split into three phases. All documents were annotated by three independent annotators, who participated in two training sessions. During the first phase, 50 random comments and forum posts were annotated. Problematic cases were resolved after discussion and the guidelines were refined. In the second phase, we wanted to extend the range of annotated registers, so we selected 148 comments and forum posts as well as 41 blog posts. After the second phase, the annotation guidelines were final.",
"In the final phase, we extended the range of annotated registers and added newswire articles from the raw corpus in order to test whether the annotation guidelines (and inherently the model) is general enough. Therefore we selected 96 comments/forum posts, 8 blog posts, and 8 articles for this phase. A detailed inter-annotator agreement study on documents from this final phase will be reported in section UID75 .",
"The annotations were very time-consuming. In total, each annotator spent 35 hours by annotating in the course of five weeks. Discussions and consolidation of the gold data took another 6 hours. Comments and forum posts required on average of 4 minutes per document to annotate, while blog posts and articles on average of 14 minutes per document. Examples of annotated documents from the gold data are listed in Appendix UID158 .",
"We discarded 11 documents out of the total 351 annotated documents. Five forum posts, although annotated as persuasive in the first annotation study, were at a deeper look a mixture of two or more posts with missing quotations, therefore unsuitable for analyzing argumentation. Three blog posts and two articles were found not to be argumentative (the authors took no stance to the discussed controversy) and one article was an interview, which the current model cannot capture (a dialogical argumentation model would be required).",
"For each of the 340 documents, the gold standard annotations were obtained using the majority vote. If simple majority voting was not possible (different boundaries of the argument component together with a different component label), the gold standard was set after discussion among the annotators. We will refer to this corpus as the gold standard Toulmin corpus. The distribution of topics and registers in this corpus in shown in Table TABREF71 , and Table TABREF72 presents some lexical statistics.",
"Based on pre-studies, we set the minimal unit for annotation as token. The documents were pre-segmented using the Stanford Core NLP sentence splitter BIBREF69 embedded in the DKPro Core framework BIBREF70 . Annotators were asked to stick to the sentence level by default and label entire pre-segmented sentences. They should switch to annotations on the token level only if (a) a particular sentence contained more than one argument component, or (b) if the automatic sentence segmentation was wrong. Given the “noise” in user-generated Web data (wrong or missing punctuation, casing, etc.), this was often the case.",
"Annotators were also asked to rephrase (summarize) each annotated argument component into a simple statement when applicable, as shown in Figure FIGREF66 . This was used as a first sanity checking step, as each argument component is expected to be a coherent discourse unit. For example, if a particular occurrence of a premise cannot be summarized/rephrased into one statement, this may require further splitting into two or more premises.",
"For the actual annotations, we developed a custom-made web-based application that allowed users to switch between different granularity of argument components (tokens or sentences), to annotate the same document in different argument “dimensions” (logos and pathos), and to write summary for each annotated argument component.",
"As a measure of annotation reliability, we rely on Krippendorff's unitized alpha ( INLINEFORM0 ) BIBREF71 . To the best of our knowledge, this is the only agreement measure that is applicable when both labels and boundaries of segments are to be annotated.",
"Although the measure has been used in related annotation works BIBREF61 , BIBREF7 , BIBREF72 , there is one important detail that has not been properly communicated. The INLINEFORM0 is computed over a continuum of the smallest units, such as tokens. This continuum corresponds to a single document in the original Krippendorff's work. However, there are two possible extensions to multiple documents (a corpus), namely (a) to compute INLINEFORM1 for each document first and then report an average value, or (b) to concatenate all documents into one large continuum and compute INLINEFORM2 over it. The first approach with averaging yielded extremely high the standard deviation of INLINEFORM3 (i.e., avg. = 0.253; std. dev. = 0.886; median = 0.476 for the claim). This says that some documents are easy to annotate while others are harder, but interpretation of such averaged value has no evidence either in BIBREF71 or other papers based upon it. Thus we use the other methodology and treat the whole corpus as a single long continuum (which yields in the example of claim 0.541 INLINEFORM4 ).",
"Table TABREF77 shows the inter-annotator agreement as measured on documents from the last annotation phase (see section UID67 ). The overall INLINEFORM0 for all register types, topics, and argument components is 0.48 in the logos dimension (annotated with the modified Toulmin's model). Such agreement can be considered as moderate by the measures proposed by Landis.Koch.1977, however, direct interpretation of the agreement value lacks consensus BIBREF54 . Similar inter-annotator agreement numbers were achieved in the relevant works in argumentation mining (refer to Table TABREF33 in section SECREF31 ; although most of the numbers are not directly comparable, as different inter-annotator metrics were used on different tasks).",
"There is a huge difference in INLINEFORM0 regarding the registers between comments + forums posts ( INLINEFORM1 0.60, Table TABREF77 a) and articles + blog posts ( INLINEFORM2 0.09, Table TABREF77 b) in the logos dimension. If we break down the value with respect to the individual argument components, the agreement on claim and premise is substantial in the case of comments and forum posts (0.59 and 0.69, respectively). By contrast, these argument components were annotated only with a fair agreement in articles and blog posts (0.22 and 0.24, respectively).",
"As can be also observed from Table TABREF77 , the annotation agreement in the logos dimension varies regarding the document topic. While it is substantial/moderate for prayer in schools (0.68) or private vs. public schools (0.44), for some topics it remains rather slight, such as in the case of redshirting (0.14) or mainstreaming (0.08).",
"First, we examine the disagreement in annotations by posing the following research question: are there any measurable properties of the annotated documents that might systematically cause low inter-annotator agreement? We use Pearson's correlation coefficient between INLINEFORM0 on each document and the particular property under investigation. We investigated the following set of measures.",
"Full sentence coverage ratio represents a ratio of argument component boundaries that are aligned to sentence boundaries. The value is 1.0 if all annotations in the particular document are aligned to sentences and 0.0 if no annotations match the sentence boundaries. Our hypothesis was that automatic segmentation to sentences was often incorrect, therefore annotators had to switch to the token level annotations and this might have increased disagreement on boundaries of the argument components.",
"Document length, paragraph length and average sentence length. Our hypotheses was that the length of documents, paragraphs, or sentences negatively affects the agreement.",
"Readability measures. We tested four standard readability measures, namely Ari BIBREF73 , Coleman-Liau BIBREF74 , Flesch BIBREF75 , and Lix BIBREF76 to find out whether readability of the documents plays any role in annotation agreement.",
"Correlation results are listed in Table TABREF82 . We observed the following statistically significant ( INLINEFORM0 ) correlations. First, document length negatively correlates with agreement in comments. The longer the comment was the lower the agreement was. Second, average paragraph length negatively correlates with agreement in blog posts. The longer the paragraphs in blogs were, the lower agreement was reached. Third, all readability scores negatively correlate with agreement in the public vs. private school domain, meaning that the more complicated the text in terms of readability is, the lower agreement was reached. We observed no significant correlation in sentence coverage and average sentence length measures. We cannot draw any general conclusion from these results, but we can state that some registers and topics, given their properties, are more challenging to annotate than others.",
"Another qualitative analysis of disagreements between annotators was performed by constructing a probabilistic confusion matrix BIBREF77 on the token level. The biggest disagreements, as can be seen in Table TABREF85 , is caused by rebuttal and refutation confused with none (0.27 and 0.40, respectively). This is another sign that these two argument components were very hard to annotate. As shown in Table TABREF77 , the INLINEFORM5 was also low – 0.08 for rebuttal and 0.17 for refutation.",
"We analyzed the annotations and found the following phenomena that usually caused disagreements between annotators.",
"Each argument component (e.g., premise or backing) should express one consistent and coherent piece of information, for example a single reason in case of the premise (see Section UID73 ). However, the decision whether a longer text should be kept as a single argument component or segmented into multiple components is subjective and highly text-specific.",
"While rhetorical questions have been researched extensively in linguistics BIBREF78 , BIBREF79 , BIBREF80 , BIBREF81 , their role in argumentation represents a substantial research question BIBREF82 , BIBREF83 , BIBREF84 , BIBREF85 , BIBREF86 . Teninbaum.2011 provides a brief history of rhetorical questions in persuasion. In short, rhetorical questions should provoke the reader. From the perspective of our argumentation model, rhetorical questions might fall both into the logos dimension (and thus be labeled as, e.g., claim, premise, etc.) or into the pathos dimension (refer to Section SECREF20 ). Again, the decision is usually not clear-cut.",
"As introduced in section UID55 , rebuttal attacks the claim by presenting an opponent's view. In most cases, the rebuttal is again attacked by the author using refutation. From the pragmatic perspective, refutation thus supports the author's stance expressed by the claim. Therefore, it can be easily confused with premises, as the function of both is to provide support for the claim. Refutation thus only takes place if it is meant as a reaction to the rebuttal. It follows the discussed matter and contradicts it. Such a discourse is usually expressed as:",
"[claim: My claim.] [rebuttal: On the other hand, some people claim XXX which makes my claim wrong.] [refutation: But this is not true, because of YYY.]",
"However, the author might also take the following defensible approach to formulate the argument:",
"[rebuttal: Some people claim XXX-1 which makes my claim wrong.] [refutation: But this is not true, because of YYY-1.] [rebuttal: Some people claim XXX-2 which makes my claim wrong.] [refutation: But this is not true, because of YYY-2.] [claim: Therefore my claim.]",
"If this argument is formulated without stating the rebuttals, it would be equivalent to the following:",
"[premise: YYY-1.] [premise: YYY-2.] [claim: Therefore my claim.]",
"This example shows that rebuttal and refutation represent a rhetorical device to produce arguments, but the distinction between refutation and premise is context-dependent and on the functional level both premise and refutation have very similar role – to support the author's standpoint. Although introducing dialogical moves into monological model and its practical consequences, as described above, can be seen as a shortcoming of our model, this rhetoric figure has been identified by argumentation researchers as procatalepsis BIBREF43 . A broader view on incorporating opposing views (or lack thereof) is discussed under the term confirmation bias by BIBREF21 who claim that “[...] people are trying to convince others. They are typically looking for arguments and evidence to confirm their own claim, and ignoring negative arguments and evidence unless they anticipate having to rebut them.” The dialectical attack of possible counter-arguments may thus strengthen one's own argument.",
"One possible solution would be to refrain from capturing this phenomena completely and to simplify the model to claims and premises, for instance. However, the following example would then miss an important piece of information, as the last two clauses would be left un-annotated. At the same time, annotating the last clause as premise would be misleading, because it does not support the claim (in fact, it supports it only indirectly by attacking the rebuttal; this can be seen as a support is considered as an admissible extension of abstract argument graph by BIBREF87 ).",
"Doc#422 (forumpost, homeschooling) [claim: I try not to be anti-homeschooling, but... it's just hard for me.] [premise: I really haven't met any homeschoolers who turned out quite right, including myself.] I apologize if what I'm saying offends any of you - that's not my intention, [rebuttal: I know that there are many homeschooled children who do just fine,] but [refutation: that hasn't been my experience.]",
"To the best of our knowledge, these context-dependent dialogical properties of argument components using Toulmin's model have not been solved in the literature on argumentation theory and we suggest that these observations should be taken into account in the future research in monological argumentation.",
"Appeal to emotion, sarcasm, irony, or jokes are common in argumentation in user-generated Web content. We also observed documents in our data that were purely sarcastic (the pathos dimension), therefore logical analysis of the argument (the logos dimension) would make no sense. However, given the structure of such documents, some claims or premises might be also identified. Such an argument is a typical example of fallacious argumentation, which intentionally pretends to present a valid argument, but its persuasion is conveyed purely for example by appealing to emotions of the reader BIBREF88 .",
"We present some statistics of the annotated data that are important from the argumentation research perspective. Regardless of the register, 48% of claims are implicit. This means that the authors assume that their standpoint towards the discussed controversy can be inferred by the reader and give only reasons for that standpoint. Also, explicit claims are mainly written just once, only in 3% of the documents the claim was rephrased and occurred multiple times.",
"In 6% of the documents, the reasons for an implicit claim are given only in the pathos dimension, making the argument purely persuasive without logical argumentation.",
"The “myside bias”, defined as a bias against information supporting another side of an argument BIBREF89 , BIBREF90 , can be observed by the presence of rebuttals to the author's claim or by formulating arguments for both sides when the overall stance is neutral. While 85% of the documents do not consider any opposing side, only 8% documents present a rebuttal, which is then attacked by refutation in 4% of the documents. Multiple rebuttals and refutations were found in 3% of the documents. Only 4% of the documents were overall neutral and presented arguments for both sides, mainly in blog posts.",
"We were also interested whether mitigating linguistic devices are employed in the annotated arguments, namely in their main stance-taking components, the claims. Such devices typically include parenthetical verbs, syntactic constructions, token agreements, hedges, challenge questions, discourse markers, and tag questions, among others BIBREF91 . In particular, [p. 1]Kaltenbock.et.al.2010 define hedging as a discourse strategy that reduces the force or truth of an utterance and thus reduces the risk a speaker runs when uttering a strong or firm assertion or other speech act. We manually examined the use of hedging in the annotated claims.",
"Our main observation is that hedging is used differently across topics. For instance, about 30-35% of claims in homeschooling and mainstreaming signal the lack of a full commitment to the expressed stance, in contrast to prayer in schools (15%) or public vs. private schools (about 10%). Typical hedging cues include speculations and modality (“If I have kids, I will probably homeschool them.”), statements as neutral observations (“It's not wrong to hold the opinion that in general it's better for kids to go to school than to be homeschooled.”), or weasel phrases BIBREF92 (“In some cases, inclusion can work fantastically well.”, “For the majority of the children in the school, mainstream would not have been a suitable placement.”).",
"On the other hand, most claims that are used for instance in the prayer in schools arguments are very direct, without trying to diminish its commitment to the conveyed belief (for example, “NO PRAYER IN SCHOOLS!... period.”, “Get it out of public schools”, “Pray at home.”, or “No organized prayers or services anywhere on public school board property - FOR ANYONE.”). Moreover, some claims are clearly offensive, persuading by direct imperative clauses towards the opponents/audience (“TAKE YOUR KIDS PRIVATE IF YOU CARE AS I DID”, “Run, don't walk, to the nearest private school.”) or even accuse the opponents for taking a certain stance (“You are a bad person if you send your children to private school.”).",
"These observations are consistent with the findings from the first annotation study on persuasion (see section UID48 ), namely that some topics attract heated argumentation where participant take very clear and reserved standpoints (such as prayer in schools or private vs. public schools), while discussions about other topics are rather milder. It has been shown that the choices a speaker makes to express a position are informed by their social and cultural background, as well as their ability to speak the language BIBREF93 , BIBREF94 , BIBREF91 . However, given the uncontrolled settings of the user-generated Web content, we cannot infer any similar conclusions in this respect.",
"We investigated premises across all topics in order to find the type of support used in the argument. We followed the approach of Park.Cardie.2014, who distinguished three types of propositions in their study, namely unverifiable, verifiable non-experiential, and verifiable experiential.",
"Verifiable non-experiential and verifiable experiential propositions, unlike unverifiable propositions, contain an objective assertion, where objective means “expressing or dealing with facts or conditions as perceived without distortion by personal feelings, prejudices, or interpretations.” Such assertions have truth values that can be proved or disproved with objective evidence; the correctness of the assertion or the availability of the objective evidence does not matter BIBREF8 . A verifiable proposition can further be distinguished as experiential or not, depending on whether the proposition is about the writer's personal state or experience or something non-experiential. Verifiable experiential propositions are sometimes referred to as anectotal evidence, provide the novel knowledge that readers are seeking BIBREF8 .",
"Table TABREF97 shows the distribution of the premise types with examples for each topic from the annotated corpus. As can be seen in the first row, arguments in prayer in schools contain majority (73%) of unverifiable premises. Closer examination reveals that their content vary from general vague propositions to obvious fallacies, such as a hasty generalization, straw men, or slippery slope. As Nieminen.Mustonen.2014 found out, fallacies are very common in argumentation about religion-related issues. On the other side of the spectrum, arguments about redshirting rely mostly on anecdotal evidence (61% of verifiable experiential propositions). We will discuss the phenomena of narratives in argumentation in more detail later in section UID98 . All the topics except private vs. public schools exhibit similar amount of verifiable non-experiential premises (9%–22%), usually referring to expert studies or facts. However, this type of premises has usually the lowest frequency.",
"Manually analyzing argumentative discourse and reconstructing (annotating) the underlying argument structure and its components is difficult. As [p. 267]Reed2006 point out, “the analysis of arguments is often hard, not only for students, but for experts too.” According to [p. 81]Harrell.2011b, argumentation is a skill and “even for simple arguments, untrained college students can identify the conclusion but without prompting are poor at both identifying the premises and how the premises support the conclusion.” [p. 81]Harrell.2011 further claims that “a wide literature supports the contention that the particular skills of understanding, evaluating, and producing arguments are generally poor in the population of people who have not had specific training and that specific training is what improves these skills.” Some studies, for example, show that students perform significantly better on reasoning tasks when they have learned to identify premises and conclusions BIBREF95 or have learned some standard argumentation norms BIBREF96 .",
"One particular extra challenge in analyzing argumentation in Web user-generated discourse is that the authors produce their texts probably without any existing argumentation theory or model in mind. We assume that argumentation or persuasion is inherent when users discuss controversial topics, but the true reasons why people participate in on-line communities and what drives their behavior is another research question BIBREF97 , BIBREF98 , BIBREF99 , BIBREF100 . When the analyzed texts have a clear intention to produce argumentative discourse, such as in argumentative essays BIBREF7 , the argumentation is much more explicit and a substantially higher inter-annotator agreement can be achieved.",
"The model seems to be suitable for short persuasive documents, such as comments and forum posts. Its applicability to longer documents, such as articles or blog posts, is problematic for several reasons.",
"The argument components of the (modified) Toulmin's model and their roles are not expressive enough to capture argumentation that not only conveys the logical structure (in terms of reasons put forward to support the claim), but also relies heavily on the rhetorical power. This involves various stylistic devices, pervading narratives, direct and indirect speech, or interviews. While in some cases the argument components are easily recognizable, the vast majority of the discourse in articles and blog posts does not correspond to any distinguishable argumentative function in the logos dimension. As the purpose of such discourse relates more to rhetoric than to argumentation, unambiguous analysis of such phenomena goes beyond capabilities of the current argumentation model. For a discussion about metaphors in Toulmin's model of argumentation see, e.g., BIBREF102 , BIBREF103 .",
"Articles without a clear standpoint towards the discussed controversy cannot be easily annotated with the model either. Although the matter is viewed from both sides and there might be reasons presented for either of them, the overall persuasive intention is missing and fitting such data to the argumentation framework causes disagreements. One solution might be to break the document down to paragraphs and annotate each paragraph separately, examining argumentation on a different level of granularity.",
"As introduced in section SECREF20 , there are several dimensions of an argument. The Toulmin's model focuses solely on the logos dimension. We decided to ignore the ethos dimension, because dealing with the author's credibility remains unclear, given the variety of the source web data. However, exploiting the pathos dimension of an argument is prevalent in the web data, for example as an appeal to emotions. Therefore we experimented with annotating appeal to emotions as a separate category independent of components in the logos dimension. We defined some features for the annotators how to distinguish appeal to emotions. Figurative language such as hyperbole, sarcasm, or obvious exaggerating to “spice up” the argument are the typical signs of pathos. In an extreme case, the whole argument might be purely emotional, as in the following example.",
"Doc#1698 (comment, prayer in schools) [app-to-emot: Prayer being removed from school is just the leading indicator of a nation that is ‘Falling Away’ from Jehovah. [...] And the disasters we see today are simply God’s finger writing on the wall: Mene, mene, Tekel, Upharsin; that is, God has weighed America in the balances, and we’ve been found wanting. No wonder 50 million babies have been aborted since 1973. [...]]",
"We kept annotations on the pathos dimension as simple as possible (with only one appeal to emotions label), but the resulting agreement was unsatisfying ( INLINEFORM0 0.30) even after several annotation iterations. Appeal to emotions is considered as a type of fallacy BIBREF104 , BIBREF18 . Given the results, we assume that more carefully designed approach to fallacy annotation should be applied. To the best of our knowledge, there have been very few research works on modeling fallacies similarly to arguments on the discourse level BIBREF105 . Therefore the question, in which detail and structure fallacies should be annotated, remains open. For the rest of the paper, we thus focus on the logos dimension solely.",
"Some of the educational topics under examination relate to young children (e.g., redshirting or mainstreaming); therefore we assume that the majority of participants in discussions are their parents. We observed that many documents related to these topics contain narratives. Sometimes the story telling is meant as a support for the argument, but there are documents where the narrative has no intention to persuade and is simply a story sharing.",
"There is no widely accepted theory of the role of narratives among argumentation scholars. According to Fisher.1987, humans are storytellers by nature, and the “reason” in argumentation is therefore better understood in and through the narratives. He found that good reasons often take the form of narratives. Hoeken.Fikkers.2014 investigated how integration of explicit argumentative content into narratives influences issue-relevant thinking and concluded that identifying with the character being in favor of the issue yielded a more positive attitude toward the issue. In a recent research, Bex.2011 proposes an argumentative-narrative model of reasoning with evidence, further elaborated in BIBREF106 ; also Niehaus.et.al.2012 proposes a computational model of narrative persuasion.",
"Stemming from another research field, LeytonEscobar2014 found that online community members who use and share narratives have higher participation levels and that narratives are useful tools to build cohesive cultures and increase participation. Betsch.et.al.2010 examined influencing vaccine intentions among parents and found that narratives carry more weight than statistics."
],
[
"This section described two annotation studies that deal with argumentation in user-generated Web content on different levels of detail. In section SECREF44 , we argued for a need of document-level distinction of persuasiveness. We annotated 990 comments and forum posts, reaching moderate inter-annotator agreement (Fleiss' INLINEFORM0 0.59). Section SECREF51 motivated the selection of a model for micro-level argument annotation, proposed its extension based on pre-study observations, and outlined the annotation set-up. This annotation study resulted into 340 documents annotated with the modified Toulmin's model and reached moderate inter-annotator agreement in the logos dimension (Krippendorff's INLINEFORM1 0.48). These results make the annotated corpora suitable for training and evaluation computational models and each of these two annotation studies will have their experimental counterparts in the following section."
],
[
"This section presents experiments conducted on the annotated corpora introduced in section SECREF4 . We put the main focus on identifying argument components in the discourse. To comply with the machine learning terminology, in this section we will use the term domain as an equivalent to a topic (remember that our dataset includes six different topics; see section SECREF38 ).",
"We evaluate three different scenarios. First, we report ten-fold cross validation over a random ordering of the entire data set. Second, we deal with in-domain ten-fold cross validation for each of the six domains. Third, in order to evaluate the domain portability of our approach, we train the system on five domains and test on the remaining one for all six domains (which we report as cross-domain validation)."
],
[
"In the following experiment, we focus on automatic identification of arguments in the discourse. Our approach is based on supervised and semi-supervised machine learning methods on the gold data Toulmin dataset introduced in section SECREF51 .",
"An argument consists of different components (such as premises, backing, etc.) which are implicitly linked to the claim. In principle one document can contain multiple independent arguments. However, only 4% of the documents in our dataset contain arguments for both sides of the issue. Thus we simplify the task and assume there is only one argument per document.",
"Given the low inter-annotator agreement on the pathos dimension (Table TABREF77 ), we focus solely on recognizing the logical dimension of argument. The pathos dimension of argument remains an open problem for a proper modeling as well as its later recognition.",
"Since the smallest annotation unit is a token and the argument components do not overlap, we approach identification of argument components as a sequence labeling problem. We use the BIO encoding, so each token belongs to one of the following 11 classes: O (not a part of any argument component), Backing-B, Backing-I, Claim-B, Claim-I, Premise-B, Premise-I, Rebuttal-B, Rebuttal-I, Refutation-B, Refutation-I. This is the minimal encoding that is able to distinguish two adjacent argument components of the same type. In our data, 48% of all adjacent argument components of the same type are direct neighbors (there are no \"O\" tokens in between).",
"We report Macro- INLINEFORM0 score and INLINEFORM1 scores for each of the 11 classes as the main evaluation metric. This evaluation is performed on the token level, and for each token the predicted label must exactly match the gold data label (classification of tokens into 11 classes).",
"As instances for the sequence labeling model, we chose sentences rather than tokens. During our initial experiments, we observed that building a sequence labeling model for recognizing argument components as sequences of tokens is too fine-grained, as a single token does not convey enough information that could be encoded as features for a machine learner. However, as discussed in section UID73 , the annotations were performed on data pre-segmented to sentences and annotating tokens was necessary only when the sentence segmentation was wrong or one sentence contained multiple argument components. Our corpus consists of 3899 sentences, from which 2214 sentences (57%) contain no argument component. From the remaining ones, only 50 sentences (1%) have more than one argument component. Although in 19 cases (0.5%) the sentence contains a Claim-Premise pair which is an important distinction from the argumentation perspective, given the overall small number of such occurrences, we simplify the task by treating each sentence as if it has either one argument component or none.",
"The approximation with sentence-level units is explained in the example in Figure FIGREF112 . In order to evaluate the expected performance loss using this approximation, we used an oracle that always predicts the correct label for the unit (sentence) and evaluated it against the true labels (recall that the evaluation against the true gold labels is done always on token level). We lose only about 10% of macro INLINEFORM0 score (0.906) and only about 2% of accuracy (0.984). This performance is still acceptable, while allowing to model sequences where the minimal unit is a sentence.",
"Table TABREF114 shows the distribution of the classes in the gold data Toulmin, where the labeling was already mapped to the sentences. The little presence of rebuttal and refutation (4 classes account only for 3.4% of the data) makes this dataset very unbalanced.",
"We chose SVMhmm BIBREF111 implementation of Structural Support Vector Machines for sequence labeling. Each sentence ( INLINEFORM0 ) is represented as a vector of real-valued features.",
"We defined the following feature sets:",
"FS0: Baseline lexical features",
"word uni-, bi-, and tri-grams (binary)",
"FS1: Structural, morphological, and syntactic features",
"First and last 3 tokens. Motivation: these tokens may contain discourse markers or other indicators for argument components, such as “therefore” and “since” for premises or “think” and “believe” for claims.",
"Relative position in paragraph and relative position in document. Motivation: We expect that claims are more likely to appear at the beginning or at the end of the document.",
"Number of POS 1-3 grams, dependency tree depth, constituency tree production rules, and number of sub-clauses. Based on BIBREF113 .",
"FS2: Topic and sentiment features",
"30 features taken from a vector representation of the sentence obtained by using Gibbs sampling on LDA model BIBREF114 , BIBREF115 with 30 topics trained on unlabeled data from the raw corpus. Motivation: Topic representation of a sentence might be valuable for detecting off-topic sentences, namely non-argument components.",
"Scores for five sentiment categories (from very negative to very positive) obtained from Stanford sentiment analyzer BIBREF116 . Motivation: Claims usually express opinions and carry sentiment.",
"FS3: Semantic, coreference, and discourse features",
"Binary features from Clear NLP Semantic Role Labeler BIBREF117 . Namely, we extract agent, predicate + agent, predicate + agent + patient + (optional) negation, argument type + argument value, and discourse marker which are based on PropBank semantic role labels. Motivation: Exploit the semantics of Capturing the semantics of the sentences.",
"Binary features from Stanford Coreference Chain Resolver BIBREF118 , e.g., presence of the sentence in a chain, transition type (i.e., nominal–pronominal), distance to previous/next sentences in the chain, or number of inter-sentence coreference links. Motivation: Presence of coreference chains indicates links outside the sentence and thus may be informative, for example, for classifying whether the sentence is a part of a larger argument component.",
"Results of a PTDB-style discourse parser BIBREF119 , namely the type of discourse relation (explicit, implicit), presence of discourse connectives, and attributions. Motivation: It has been claimed that discourse relations play a role in argumentation mining BIBREF120 .",
"FS4: Embedding features",
"300 features from word embedding vectors using word embeddings trained on part of the Google News dataset BIBREF121 . In particular, we sum up embedding vectors (dimensionality 300) of each word, resulting into a single vector for the entire sentence. This vector is then directly used as a feature vector. Motivation: Embeddings helped to achieve state-of-the-art results in various NLP tasks BIBREF116 , BIBREF122 .",
"Except the baseline lexical features, all feature types are extracted not only for the current sentence INLINEFORM0 , but also for INLINEFORM1 preceding and subsequent sentences, namely INLINEFORM2 , INLINEFORM3 , INLINEFORM4 INLINEFORM5 , INLINEFORM6 , where INLINEFORM7 was empirically set to 4. Each feature is then represented with a prefix to determine its relative position to the current sequence unit.",
"Let us first discuss the upper bounds of the system. Performance of the three human annotators is shown in the first column of Table TABREF139 (results are obtained from a cumulative confusion matrix). The overall Macro- INLINEFORM0 score is 0.602 (accuracy 0.754). If we look closer at the different argument components, we observe that humans are good at predicting claims, premises, backing and non-argumentative text (about 0.60-0.80 INLINEFORM1 ), but on rebuttal and refutation they achieve rather low scores. Without these two components, the overall human Macro- INLINEFORM2 would be 0.707. This trend follows the inter-annotator agreement scores, as discussed in section UID75 .",
"In our experiments, the feature sets were combined in the bottom-up manner, starting with the simple lexical features (FS0), adding structural and syntactic features (FS1), then adding topic and sentiment features (FS2), then features reflecting the discourse structure (FS3), and finally enriched with completely unsupervised latent vector space representation (FS4). In addition, we were gradually removing the simple features (e.g., without lexical features, without syntactic features, etc.) to test the system with more “abstract” feature sets (feature ablation). The results are shown in Table TABREF139 .",
"The overall best performance (Macro- INLINEFORM0 0.251) was achieved using the rich feature sets (01234 and 234) and significantly outperformed the baseline as well as other feature sets. Classification of non-argumentative text (the \"O\" class) yields about 0.7 INLINEFORM1 score even in the baseline setting. The boundaries of claims (Cla-B), premises (Pre-B), and backing (Bac-B) reach in average lower scores then their respective inside tags (Cla-I, Pre-I, Bac-I). It can be interpreted such that the system is able to classify that a certain sentence belongs to a certain argument component, but the distinction whether it is a beginning of the argument component is harder. The very low numbers for rebuttal and refutation have two reasons. First, these two argument components caused many disagreements in the annotations, as discussed in section UID86 , and were hard to recognize for the humans too. Second, these four classes have very few instances in the corpus (about 3.4%, see Table TABREF114 ), so the classifier suffers from the lack of training data.",
"The results for the in-domain cross validation scenario are shown in Table TABREF140 . Similarly to the cross-validation scenario, the overall best results were achieved using the largest feature set (01234). For mainstreaming and red-shirting, the best results were achieved using only the feature set 4 (embeddings). These two domains contain also fewer documents, compared to other domains (refer to Table TABREF71 ). We suspect that embeddings-based features convey important information when not enough in-domain data are available. This observation will become apparent in the next experiment.",
"The cross-domain experiments yield rather poor results for most of the feature combinations (Table TABREF141 ). However, using only feature set 4 (embeddings), the system performance increases rapidly, so it is even comparable to numbers achieved in the in-domain scenario. These results indicate that embedding features generalize well across domains in our task of argument component identification. We leave investigating better performing vector representations, such as paragraph vectors BIBREF123 , for future work.",
"Error analysis based on the probabilistic confusion matrix BIBREF124 shown in Table TABREF142 reveals further details. About a half of the instances for each class are misclassified as non-argumentative (the \"O\" prediction).",
"Backing-B is often confused with Premise-B (12%) and Backing-I with Premise-I (23%). Similarly, Premise-I is misclassified as Backing-I in 9%. This shows that distinguishing between backing and premises is not easy because these two components are similar such that they support the claim, as discussed in section UID86 . We can also see that the misclassification is consistent among *-B and *-I tags.",
"Rebuttal is often misclassified as Premise (28% for Rebuttal-I and 18% for Rebuttal-B; notice again the consistency in *-B and *-I tags). This is rather surprising, as one would expect that rebuttal would be confused with a claim, because its role is to provide an opposing view.",
"Refutation-B and Refutation-I is misclassified as Premise-I in 19% and 27%, respectively. This finding confirms the discussion in section UID86 , because the role of refutation is highly context-dependent. In a pragmatic perspective, it is put forward to indirectly support the claim by attacking the rebuttal, thus having a similar function to the premise.",
"We manually examined miss-classified examples produced the best-performing system to find out which phenomena pose biggest challenges. Properly detecting boundaries of argument components caused problems, as shown in Figure FIGREF146 (a). This goes in line with the granularity annotation difficulties discussed in section UID86 . The next example in Figure FIGREF146 (b) shows that even if boundaries of components were detected precisely, the distinction between premise and backing fails. The example also shows that in some cases, labeling on clause level is required (left-hand side claim and premise) but the approximation in the system cannot cope with this level of detail (as explained in section UID111 ). Confusing non-argumentative text and argument components by the system is sometimes plausible, as is the case of the last rhetorical question in Figure FIGREF146 (c). On the other hand, the last example in Figure FIGREF146 (d) shows that some claims using figurative language were hard to be identified. The complete predictions along with the gold data are publicly available.",
"SVMhmm offers many hyper-parameters with suggested default values, from which three are of importance. Parameter INLINEFORM0 sets the order of dependencies of transitions in HMM, parameter INLINEFORM1 sets the order of dependencies of emissions in HMM, and parameter INLINEFORM2 represents a trading-off slack versus magnitude of the weight-vector. For all experiments, we set all the hyper-parameters to their default values ( INLINEFORM3 , INLINEFORM4 , INLINEFORM5 ). Using the best performing feature set from Table TABREF139 , we experimented with a grid search over different values ( INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ) but the results did not outperform the system trained with default parameter values.",
"The INLINEFORM0 scores might seem very low at the first glance. One obvious reason is the actual performance of the system, which gives a plenty of room for improvement in the future. But the main cause of low INLINEFORM2 numbers is the evaluation measure — using 11 classes on the token level is very strict, as it penalizes a mismatch in argument component boundaries the same way as a wrongly predicted argument component type. Therefore we also report two another evaluation metrics that help to put our results into a context.",
"Krippendorff's INLINEFORM0 — It was also used for evaluating inter-annotator agreement (see section UID75 ).",
"Boundary similarity BIBREF125 — Using this metric, the problem is treated solely as a segmentation task without recognizing the argument component types.",
"As shown in Table TABREF157 (the Macro- INLINEFORM0 scores are repeated from Table TABREF139 ), the best-performing system achieves 0.30 score using Krippendorf's INLINEFORM1 , which is in the middle between the baseline and the human performance (0.48) but is considered as poor from the inter-annotator agreement point of view BIBREF54 . The boundary similarity metrics is not directly suitable for evaluating argument component classification, but reveals a sub-task of finding the component boundaries. The best system achieved 0.32 on this measure. Vovk2013MT used this measure to annotate argument spans and his annotators achieved 0.36 boundary similarity score. Human annotators in BIBREF125 reached 0.53 boundary similarity score.",
"The overall performance of the system is also affected by the accuracy of individual NPL tools used for extracting features. One particular problem is that the preprocessing models we rely on (POS, syntax, semantic roles, coreference, discourse; see section UID115 ) were trained on newswire corpora, so one has to expect performance drop when applied on user-generated content. This is however a well-known issue in NLP BIBREF126 , BIBREF127 , BIBREF128 .",
"To get an impression of the actual performance of the system on the data, we also provide the complete output of our best performing system in one PDF document together with the gold annotations in the logos dimension side by side in the accompanying software package. We believe this will help the community to see the strengths of our model as well as possible limitations of our current approaches."
],
[
"Let us begin with summarizing answers to the research questions stated in the introduction. First, as we showed in section UID55 , existing argumentation theories do offer models for capturing argumentation in user-generated content on the Web. We built upon the Toulmin's model and proposed some extensions.",
"Second, as compared to the negative experiences with annotating using Walton's schemes (see sections UID52 and SECREF31 ), our modified Toulmin's model offers a trade-off between its expressiveness and annotation reliability. However, we found that the capabilities of the model to capture argumentation depend on the register and topic, the length of the document, and inherently on the literary devices and structures used for expressing argumentation as these properties influenced the agreement among annotators.",
"Third, there are aspects of online argumentation that lack their established theoretical counterparts, such as rhetorical questions, figurative language, narratives, and fallacies in general. We tried to model some of them in the pathos dimension of argument (section UID103 ), but no satisfying agreement was reached. Furthermore, we dealt with a step that precedes argument analysis by filtering documents given their persuasiveness with respect to the controversy. Finally, we proposed a computational model based on machine learning for identifying argument components (section SECREF108 ). In this identification task, we experimented with a wide range of linguistically motivated features and found that (1) the largest feature set (including n-grams, structural features, syntactic features, topic distribution, sentiment distribution, semantic features, coreference feaures, discourse features, and features based on word embeddings) performs best in both in-domain and all-data cross validation, while (2) features based only on word embeddings yield best results in cross-domain evaluation.",
"Since there is no one-size-fits-all argumentation theory to be applied to actual data on the Web, the argumentation model and an annotation scheme for argumentation mining is a function of the task requirements and the corpus properties. Its selection should be based on the data at hand and the desired application. Given the proposed use-case scenarios (section SECREF1 ) and the results of our annotation study (section SECREF51 ), we recommend a scheme based on Toulmin's model for short documents, such as comments or forum posts."
]
],
"section_name": [
"Introduction",
"Our contributions",
"Theoretical background",
"Argumentation models",
"Dimensions of argument",
"Original Toulmin's model",
"Related work in computational linguistics",
"Argumentation Mining",
"Stance detection",
"Online persuasion",
"Annotation studies and corpus creation",
"Topics and registers",
"Raw corpus statistics",
"Annotation study 1: Identifying persuasive documents in forums and comments",
"Annotation study 2: Annotating micro-structure of arguments",
"Summary of annotation studies",
"Experiments",
"Identification of argument components",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"78979e74f92942e4286003168bb6eee9ebf9f4c4",
"aeffb1bb4314e5ca92a1cd121985e696e77d8def"
],
"answer": [
{
"evidence": [
"As a main field of interest in the current study, we chose controversies in education. One distinguishing feature of educational topics is their breadth of sub-topics and points of view, as they attract researchers, practitioners, parents, students, or policy-makers. We assume that this diversity leads to the linguistic variability of the education topics and thus represents a challenge for NLP. In a cooperation with researchers from the German Institute for International Educational Research we identified the following current controversial topics in education in English-speaking countries: (1) homeschooling, (2) public versus private schools, (3) redshirting — intentionally delaying the entry of an age-eligible child into kindergarten, allowing their child more time to mature emotionally and physically BIBREF51 , (4) prayer in schools — whether prayer in schools should be allowed and taken as a part of education or banned completely, (5) single-sex education — single-sex classes (males and females separate) versus mixed-sex classes (“co-ed”), and (6) mainstreaming — including children with special needs into regular classes."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In a cooperation with researchers from the German Institute for International Educational Research we identified the following current controversial topics in education in English-speaking countries: (1) homeschooling, (2) public versus private schools, (3) redshirting — intentionally delaying the entry of an age-eligible child into kindergarten, allowing their child more time to mature emotionally and physically BIBREF51 , (4) prayer in schools — whether prayer in schools should be allowed and taken as a part of education or banned completely, (5) single-sex education — single-sex classes (males and females separate) versus mixed-sex classes (“co-ed”), and (6) mainstreaming — including children with special needs into regular classes."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"As a main field of interest in the current study, we chose controversies in education. One distinguishing feature of educational topics is their breadth of sub-topics and points of view, as they attract researchers, practitioners, parents, students, or policy-makers. We assume that this diversity leads to the linguistic variability of the education topics and thus represents a challenge for NLP. In a cooperation with researchers from the German Institute for International Educational Research we identified the following current controversial topics in education in English-speaking countries: (1) homeschooling, (2) public versus private schools, (3) redshirting — intentionally delaying the entry of an age-eligible child into kindergarten, allowing their child more time to mature emotionally and physically BIBREF51 , (4) prayer in schools — whether prayer in schools should be allowed and taken as a part of education or banned completely, (5) single-sex education — single-sex classes (males and females separate) versus mixed-sex classes (“co-ed”), and (6) mainstreaming — including children with special needs into regular classes."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In a cooperation with researchers from the German Institute for International Educational Research we identified the following current controversial topics in education in English-speaking countries: (1) homeschooling, (2) public versus private schools, (3) redshirting — intentionally delaying the entry of an age-eligible child into kindergarten, allowing their child more time to mature emotionally and physically BIBREF51 , (4) prayer in schools — whether prayer in schools should be allowed and taken as a part of education or banned completely, (5) single-sex education — single-sex classes (males and females separate) versus mixed-sex classes (“co-ed”), and (6) mainstreaming — including children with special needs into regular classes."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"746f142b0c230013c56a6e47f530eb58db6ef2bd",
"a4750aab7f28a204969e136f0e3f715e72f95820"
],
"answer": [
{
"evidence": [
"We call the model as a modified Toulmin's model. It contains five argument components, namely claim, premise, backing, rebuttal, and refutation. When annotating a document, any arbitrary token span can be labeled with an argument component; the components do not overlap. The spans are not known in advance and the annotator thus chooses the span and the component type at the same time. All components are optional (they do not have to be present in the argument) except the claim, which is either explicit or implicit (see above). If a token span is not labeled by any argument component, it is not considered as a part of the argument and is later denoted as none (this category is not assigned by the annotators)."
],
"extractive_spans": [
"claim, premise, backing, rebuttal, and refutation"
],
"free_form_answer": "",
"highlighted_evidence": [
"We call the model as a modified Toulmin's model. It contains five argument components, namely claim, premise, backing, rebuttal, and refutation. When annotating a document, any arbitrary token span can be labeled with an argument component; the components do not overlap. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We call the model as a modified Toulmin's model. It contains five argument components, namely claim, premise, backing, rebuttal, and refutation. When annotating a document, any arbitrary token span can be labeled with an argument component; the components do not overlap. The spans are not known in advance and the annotator thus chooses the span and the component type at the same time. All components are optional (they do not have to be present in the argument) except the claim, which is either explicit or implicit (see above). If a token span is not labeled by any argument component, it is not considered as a part of the argument and is later denoted as none (this category is not assigned by the annotators)."
],
"extractive_spans": [
"claim",
"premise",
"backing",
"rebuttal",
"refutation"
],
"free_form_answer": "",
"highlighted_evidence": [
" It contains five argument components, namely claim, premise, backing, rebuttal, and refutation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"06b694fe757d0133a819e8f1c0909570052bb3e3",
"139179693aac0e1851712550af8e885f5bfeddd9"
],
"answer": [
{
"evidence": [
"We chose SVMhmm BIBREF111 implementation of Structural Support Vector Machines for sequence labeling. Each sentence ( INLINEFORM0 ) is represented as a vector of real-valued features."
],
"extractive_spans": [],
"free_form_answer": "Structural Support Vector Machine",
"highlighted_evidence": [
"We chose SVMhmm BIBREF111 implementation of Structural Support Vector Machines for sequence labeling.",
"SVM implementation of "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We chose SVMhmm BIBREF111 implementation of Structural Support Vector Machines for sequence labeling. Each sentence ( INLINEFORM0 ) is represented as a vector of real-valued features."
],
"extractive_spans": [
"SVMhmm "
],
"free_form_answer": "",
"highlighted_evidence": [
"We chose SVMhmm BIBREF111 implementation of Structural Support Vector Machines for sequence labeling."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"5cd81c4f6086a10ba24ffdc9193af60af742ec6f",
"6440edd1f184917f771c70e33fe9a5aa7dbeb1e0"
],
"answer": [
{
"evidence": [
"Since we were also interested in whether argumentation differs across registers, we included four different registers — namely (1) user comments to newswire articles or to blog posts, (2) posts in discussion forums (forum posts), (3) blog posts, and (4) newswire articles. Throughout this work, we will refer to each article, blog post, comment, or forum posts as a document. This variety of sources covers mainly user-generated content except newswire articles which are written by professionals and undergo an editing procedure by the publisher. Since many publishers also host blog-like sections on their portals, we consider as blog posts all content that is hosted on personal blogs or clearly belong to a blog category within a newswire portal."
],
"extractive_spans": [
"user comments to newswire articles or to blog posts",
"forum posts",
"blog posts",
"newswire articles"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since we were also interested in whether argumentation differs across registers, we included four different registers — namely (1) user comments to newswire articles or to blog posts, (2) posts in discussion forums (forum posts), (3) blog posts, and (4) newswire articles. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As a main field of interest in the current study, we chose controversies in education. One distinguishing feature of educational topics is their breadth of sub-topics and points of view, as they attract researchers, practitioners, parents, students, or policy-makers. We assume that this diversity leads to the linguistic variability of the education topics and thus represents a challenge for NLP. In a cooperation with researchers from the German Institute for International Educational Research we identified the following current controversial topics in education in English-speaking countries: (1) homeschooling, (2) public versus private schools, (3) redshirting — intentionally delaying the entry of an age-eligible child into kindergarten, allowing their child more time to mature emotionally and physically BIBREF51 , (4) prayer in schools — whether prayer in schools should be allowed and taken as a part of education or banned completely, (5) single-sex education — single-sex classes (males and females separate) versus mixed-sex classes (“co-ed”), and (6) mainstreaming — including children with special needs into regular classes.",
"Since we were also interested in whether argumentation differs across registers, we included four different registers — namely (1) user comments to newswire articles or to blog posts, (2) posts in discussion forums (forum posts), (3) blog posts, and (4) newswire articles. Throughout this work, we will refer to each article, blog post, comment, or forum posts as a document. This variety of sources covers mainly user-generated content except newswire articles which are written by professionals and undergo an editing procedure by the publisher. Since many publishers also host blog-like sections on their portals, we consider as blog posts all content that is hosted on personal blogs or clearly belong to a blog category within a newswire portal."
],
"extractive_spans": [
"refer to each article, blog post, comment, or forum posts as a document"
],
"free_form_answer": "",
"highlighted_evidence": [
"In a cooperation with researchers from the German Institute for International Educational Research we identified the following current controversial topics in education in English-speaking countries: (1) homeschooling, (2) public versus private schools, (3) redshirting — intentionally delaying the entry of an age-eligible child into kindergarten, allowing their child more time to mature emotionally and physically BIBREF51 , (4) prayer in schools — whether prayer in schools should be allowed and taken as a part of education or banned completely, (5) single-sex education — single-sex classes (males and females separate) versus mixed-sex classes (“co-ed”), and (6) mainstreaming — including children with special needs into regular classes.\n\nSince we were also interested in whether argumentation differs across registers, we included four different registers — namely (1) user comments to newswire articles or to blog posts, (2) posts in discussion forums (forum posts), (3) blog posts, and (4) newswire articles. Throughout this work, we will refer to each article, blog post, comment, or forum posts as a document. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"d8b4e49946a89214a31043ac5304b5f5661a8f0d"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"5b73834963fd0ebd0e733fd3429b78d53eb3f92e"
],
"answer": [
{
"evidence": [
"As a main field of interest in the current study, we chose controversies in education. One distinguishing feature of educational topics is their breadth of sub-topics and points of view, as they attract researchers, practitioners, parents, students, or policy-makers. We assume that this diversity leads to the linguistic variability of the education topics and thus represents a challenge for NLP. In a cooperation with researchers from the German Institute for International Educational Research we identified the following current controversial topics in education in English-speaking countries: (1) homeschooling, (2) public versus private schools, (3) redshirting — intentionally delaying the entry of an age-eligible child into kindergarten, allowing their child more time to mature emotionally and physically BIBREF51 , (4) prayer in schools — whether prayer in schools should be allowed and taken as a part of education or banned completely, (5) single-sex education — single-sex classes (males and females separate) versus mixed-sex classes (“co-ed”), and (6) mainstreaming — including children with special needs into regular classes."
],
"extractive_spans": [
"linguistic variability"
],
"free_form_answer": "",
"highlighted_evidence": [
"One distinguishing feature of educational topics is their breadth of sub-topics and points of view, as they attract researchers, practitioners, parents, students, or policy-makers. We assume that this diversity leads to the linguistic variability of the education topics and thus represents a challenge for NLP."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"What argument components do the ML methods aim to identify?",
"Which machine learning methods are used in experiments?",
"How is the data in the new corpus come sourced?",
"What argumentation phenomena encounter in actual data are now accounted for by this work?",
"What challenges do different registers and domains pose to this task?"
],
"question_id": [
"2cd37743bcc7ea3bd405ce6d91e79e5339d7642e",
"eac9dae3492e17bc49c842fb566f464ff18c049b",
"7697baf8d8d582c1f664a614f6332121061f87db",
"1cb100182508cf55b3509283c0e2bbcd527d625e",
"206739417251064b910ae9e5ff096e867ee10fb8",
"d6401cece55a14d2a35ba797a0878dfe2deabedc"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [],
"file": []
} | [
"Which machine learning methods are used in experiments?"
] | [
[
"1601.02403-Identification of argument components-8"
]
] | [
"Structural Support Vector Machine"
] | 173 |
1912.03627 | A Multi Purpose and Large Scale Speech Corpus in Persian and English for Speaker and Speech Recognition: The Deepmine Database | DeepMine is a speech database in Persian and English designed to build and evaluate text-dependent, text-prompted, and text-independent speaker verification, as well as Persian speech recognition systems. It contains more than 1850 speakers and 540 thousand recordings overall, more than 480 hours of speech are transcribed. It is the first public large-scale speaker verification database in Persian, the largest public text-dependent and text-prompted speaker verification database in English, and the largest public evaluation dataset for text-independent speaker verification. It has a good coverage of age, gender, and accents. We provide several evaluation protocols for each part of the database to allow for research on different aspects of speaker verification. We also provide the results of several experiments that can be considered as baselines: HMM-based i-vectors for text-dependent speaker verification, and HMM-based as well as state-of-the-art deep neural network based ASR. We demonstrate that the database can serve for training robust ASR models. | {
"paragraphs": [
[
"Nowadays deep learning techniques outperform the other conventional methods in most of the speech-related tasks. Training robust deep neural networks for each task depends on the availability of powerful processing GPUs, as well as standard and large scale datasets. In text-independent speaker verification, large-scale datasets are available, thanks to the NIST SRE evaluations and other data collection projects such as VoxCeleb BIBREF0.",
"In text-dependent speaker recognition, experiments with end-to-end architectures conducted on large proprietary databases have demonstrated their superiority over traditional approaches BIBREF1. Yet, contrary to text-independent speaker recognition, text-dependent speaker recognition lacks large-scale publicly available databases. The two most well-known datasets are probably RSR2015 BIBREF2 and RedDots BIBREF3. The former contains speech data collected from 300 individuals in a controlled manner, while the latter is used primarily for evaluation rather than training, due to its small number of speakers (only 64). Motivated by this lack of large-scale dataset for text-dependent speaker verification, we chose to proceed with the collection of the DeepMine dataset, which we expect to become a standard benchmark for the task.",
"Apart from speaker recognition, large amounts of training data are required also for training automatic speech recognition (ASR) systems. Such datasets should not only be large in size, they should also be characterized by high variability with respect to speakers, age and dialects. While several datasets with these properties are available for languages like English, Mandarin, French, this is not the case for several other languages, such as Persian. To this end, we proceeded with collecting a large-scale dataset, suitable for building robust ASR models in Persian.",
"The main goal of the DeepMine project was to collect speech from at least a few thousand speakers, enabling research and development of deep learning methods. The project started at the beginning of 2017, and after designing the database and the developing Android and server applications, the data collection began in the middle of 2017. The project finished at the end of 2018 and the cleaned-up and final version of the database was released at the beginning of 2019. In BIBREF4, the running project and its data collection scenarios were described, alongside with some preliminary results and statistics. In this paper, we announce the final and cleaned-up version of the database, describe its different parts and provide various evaluation setups for each part. Finally, since the database was designed mainly for text-dependent speaker verification purposes, some baseline results are reported for this task on the official evaluation setups. Additional baseline results are also reported for Persian speech recognition. However, due to the space limitation in this paper, the baseline results are not reported for all the database parts and conditions. They will be defined and reported in the database technical documentation and in a future journal paper."
],
[
"DeepMine is publicly available for everybody with a variety of licenses for different users. It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application. Each respondent installed the application on his/her personal device and recorded several phrases in different sessions. The Android application did various checks on each utterance and if it passed all of them, the respondent was directed to the next phrase. For more information about data collection scenario, please refer to BIBREF4."
],
[
"In order to clean-up the database, the main post-processing step was to filter out problematic utterances. Possible problems include speaker word insertions (e.g. repeating some part of a phrase), deletions, substitutions, and involuntary disfluencies. To detect these, we implemented an alignment stage, similar to the second alignment stage in the LibriSpeech project BIBREF5. In this method, a custom decoding graph was generated for each phrase. The decoding graph allows for word skipping and word insertion in the phrase.",
"For text-dependent and text-prompted parts of the database, such errors are not allowed. Hence, any utterances with errors were removed from the enrollment and test lists. For the speech recognition part, a sub-part of the utterance which is correctly aligned to the corresponding transcription is kept. After the cleaning step, around 190 thousand utterances with full transcription and 10 thousand with sub-part alignment have remained in the database."
],
[
"After processing the database and removing problematic respondents and utterances, 1969 respondents remained in the database, with 1149 of them being male and 820 female. 297 of the respondents could not read English and have therefore read only the Persian prompts. About 13200 sessions were recorded by females and similarly, about 9500 sessions by males, i.e. women are over-represented in terms of sessions, even though their number is 17% smaller than that of males. Other useful statistics related to the database are shown in Table TABREF4.",
"The last status of the database, as well as other related and useful information about its availability can be found on its website, together with a limited number of samples."
],
[
"The DeepMine database consists of three parts. The first one contains fixed common phrases to perform text-dependent speaker verification. The second part consists of random sequences of words useful for text-prompted speaker verification, and the last part includes phrases with word- and phoneme-level transcription, useful for text-independent speaker verification using a random phrase (similar to Part4 of RedDots). This part can also serve for Persian ASR training. Each part is described in more details below. Table TABREF11 shows the number of unique phrases in each part of the database. For the English text-dependent part, the following phrases were selected from part1 of the RedDots database, hence the RedDots can be used as an additional training set for this part:",
"“My voice is my password.”",
"“OK Google.”",
"“Artificial intelligence is for real.”",
"“Actions speak louder than words.”",
"“There is no such thing as a free lunch.”"
],
[
"This part contains a set of fixed phrases which are used to verify speakers in text-dependent mode. Each speaker utters 5 Persian phrases, and if the speaker can read English, 5 phrases selected from Part1 of the RedDots database are also recorded.",
"We have created three experimental setups with different numbers of speakers in the evaluation set. For each setup, speakers with more recording sessions are included in the evaluation set and the rest of the speakers are used for training in the background set (in the database, all background sets are basically training data). The rows in Table TABREF13 corresponds to the different experimental setups and shows the numbers of speakers in each set. Note that, for English, we have filtered the (Persian native) speakers by the ability to read English. Therefore, there are fewer speakers in each set for English than for Persian. There is a small “dev” set in each setup which can be used for parameter tuning to prevent over-tuning on the evaluation set.",
"For each experimental setup, we have defined several official trial lists with different numbers of enrollment utterances per trial in order to investigate the effects of having different amounts of enrollment data. All trials in one trial list have the same number of enrollment utterances (3 to 6) and only one test utterance. All enrollment utterances in a trial are taken from different consecutive sessions and the test utterance is taken from yet another session. From all the setups and conditions, the 100-spk with 3-session enrollment (3-sess) is considered as the main evaluation condition. In Table TABREF14, the number of trials for Persian 3-sess are shown for the different types of trial in the text-dependent speaker verification (SV). Note that for Imposter-Wrong (IW) trials (i.e. imposter speaker pronouncing wrong phrase), we merely create one wrong trial for each Imposter-Correct (IC) trial to limit the huge number of possible trials for this case. So, the number of trials for IC and IW cases are the same."
],
[
"For this part, in each session, 3 random sequences of Persian month names are shown to the respondent in two modes: In the first mode, the sequence consists of all 12 months, which will be used for speaker enrollment. The second mode contains a sequence of 3 month names that will be used as a test utterance. In each 8 sessions received by a respondent from the server, there are 3 enrollment phrases of all 12 months (all in just one session), and $7 \\times 3$ other test phrases, containing fewer words. For a respondent who can read English, 3 random sequences of English digits are also recorded in each session. In one of the sessions, these sequences contain all digits and the remaining ones contain only 4 digits.",
"Similar to the text-dependent case, three experimental setups with different number of speaker in the evaluation set are defined (corresponding to the rows in Table TABREF16). However, different strategy is used for defining trials: Depending on the enrollment condition (1- to 3-sess), trials are enrolled on utterances of all words from 1 to 3 different sessions (i.e. 3 to 9 utterances). Further, we consider two conditions for test utterances: seq test utterance with only 3 or 4 words and full test utterances with all words (i.e. same words as in enrollment but in different order). From all setups an all conditions, the 100-spk with 1-session enrolment (1-sess) is considered as the main evaluation condition for the text-prompted case. In Table TABREF16, the numbers of trials (sum for both seq and full conditions) for Persian 1-sess are shown for the different types of trials in the text-prompted SV. Again, we just create one IW trial for each IC trial.",
""
],
[
"In this part, 8 Persian phrases that have already been transcribed on the phone level are displayed to the respondent. These phrases are chosen mostly from news and Persian Wikipedia. If the respondent is unable to read English, instead of 5 fixed phrases and 3 random digit strings, 8 other Persian phrases are also prompted to the respondent to have exactly 24 phrases in each recording session.",
"This part can be useful at least for three potential applications. First, it can be used for text-independent speaker verification. The second application of this part (same as Part4 of RedDots) is text-prompted speaker verification using random text (instead of a random sequence of words). Finally, the third application is large vocabulary speech recognition in Persian (explained in the next sub-section).",
"Based on the recording sessions, we created two experimental setups for speaker verification. In the first one, respondents with at least 17 recording sessions are included to the evaluation set, respondents with 16 sessions to the development and the rest of respondents to the background set (can be used as training data). In the second setup, respondents with at least 8 sessions are included to the evaluation set, respondents with 6 or 7 sessions to the development and the rest of respondents to the background set. Table TABREF18 shows numbers of speakers in each set of the database for text-independent SV case.",
"For text-independent SV, we have considered 4 scenarios for enrollment and 4 scenarios for test. The speaker can be enrolled using utterances from 1, 2 or 3 consecutive sessions (1sess to 3sess) or using 8 utterances from 8 different sessions. The test speech can be one utterance (1utt) for short duration scenario or all utterances in one session (1sess) for long duration case. In addition, test speech can be selected from 5 English phrases for cross-language testing (enrollment using Persian utterances and test using English utterances). From all setups, 1sess-1utt and 1sess-1sess for 438-spk set are considered as the main evaluation setups for text-independent case. Table TABREF19 shows number of trials for these setups.",
"For text-prompted SV with random text, the same setup as text-independent case together with corresponding utterance transcriptions can be used."
],
[
"As explained before, Part3 of the DeepMine database can be used for Persian read speech recognition. There are only a few databases for speech recognition in Persian BIBREF6, BIBREF7. Hence, this part can at least partly address this problem and enable robust speech recognition applications in Persian. Additionally, it can be used for speaker recognition applications, such as training deep neural networks (DNNs) for extracting bottleneck features BIBREF8, or for collecting sufficient statistics using DNNs for i-vector training.",
"We have randomly selected 50 speakers (25 for each gender) from the all speakers in the database which have net speech (without silence parts) between 25 minutes to 50 minutes as test speakers. For each speaker, the utterances in the first 5 sessions are included to (small) test-set and the other utterances of test speakers are considered as a large-test-set. The remaining utterances of the other speakers are included in the training set. The test-set, large-test-set and train-set contain 5.9, 28.5 and 450 hours of speech respectively.",
"There are about 8300 utterances in Part3 which contain only Persian full names (i.e. first and family name pairs). Each phrase consists of several full names and their phoneme transcriptions were extracted automatically using a trained Grapheme-to-Phoneme (G2P). These utterances can be used to evaluate the performance of a systems for name recognition, which is usually more difficult than the normal speech recognition because of the lack of a reliable language model."
],
[
"Due to the space limitation, we present results only for the Persian text-dependent speaker verification and speech recognition."
],
[
"We conducted an experiment on text-dependent speaker verification part of the database, using the i-vector based method proposed in BIBREF9, BIBREF10 and applied it to the Persian portion of Part1. In this experiment, 20-dimensional MFCC features along with first and second derivatives are extracted from 16 kHz signals using HTK BIBREF11 with 25 ms Hamming windowed frames with 15 ms overlap.",
"The reported results are obtained with a 400-dimensional gender independent i-vector based system. The i-vectors are first length-normalized and are further normalized using phrase- and gender-dependent Regularized Within-Class Covariance Normalization (RWCCN) BIBREF10. Cosine distance is used to obtain speaker verification scores and phrase- and gender-dependent s-norm is used for normalizing the scores. For aligning speech frames to Gaussian components, monophone HMMs with 3 states and 8 Gaussian components in each state are used BIBREF10. We only model the phonemes which appear in the 5 Persian text-dependent phrases.",
"For speaker verification experiments, the results were reported in terms of Equal Error Rate (EER) and Normalized Detection Cost Function as defined for NIST SRE08 ($\\mathrm {NDCF_{0.01}^{min}}$) and NIST SRE10 ($\\mathrm {NDCF_{0.001}^{min}}$). As shown in Table TABREF22, in text-dependent SV there are 4 types of trials: Target-Correct and Imposter-Correct refer to trials when the pass-phrase is uttered correctly by target and imposter speakers respectively, and in same manner, Target-Wrong and Imposter-Wrong refer to trials when speakers uttered a wrong pass-phrase. In this paper, only the correct trials (i.e. Target-Correct as target trials vs Imposter-Correct as non-target trials) are considered for evaluating systems as it has been proved that these are the most challenging trials in text-dependent SV BIBREF8, BIBREF12.",
"Table TABREF23 shows the results of text-dependent experiments using Persian 100-spk and 3-sess setup. For filtering trials, the respondents' mobile brand and model were used in this experiment. In the table, the first two letters in the filter notation relate to the target trials and the second two letters (i.e. right side of the colon) relate for non-target trials. For target trials, the first Y means the enrolment and test utterances were recorded using a device with the same brand by the target speaker. The second Y letter means both recordings were done using exactly the same device model. Similarly, the first Y for non-target trials means that the devices of target and imposter speakers are from the same brand (i.e. manufacturer). The second Y means that, in addition to the same brand, both devices have the same model. So, the most difficult target trials are “NN”, where the speaker has used different a device at the test time. In the same manner, the most difficult non-target trials which should be rejected by the system are “YY” where the imposter speaker has used the same device model as the target speaker (note that it does not mean physically the same device because each speaker participated in the project using a personal mobile device). Hence, the similarity in the recording channel makes rejection more difficult.",
"The first row in Table TABREF23 shows the results for all trials. By comparing the results with the best published results on RSR2015 and RedDots BIBREF10, BIBREF8, BIBREF12, it is clear that the DeepMine database is more challenging than both RSR2015 and RedDots databases. For RSR2015, the same i-vector/HMM-based method with both RWCCN and s-norm has achieved EER less than 0.3% for both genders (Table VI in BIBREF10). The conventional Relevance MAP adaptation with HMM alignment without applying any channel-compensation techniques (i.e. without applying RWCCN and s-norm due to the lack of suitable training data) on RedDots Part1 for the male has achieved EER around 1.5% (Table XI in BIBREF10). It is worth noting that EERs for DeepMine database without any channel-compensation techniques are 2.1 and 3.7% for males and females respectively.",
"One interesting advantage of the DeepMine database compared to both RSR2015 and RedDots is having several target speakers with more than one mobile device. This is allows us to analyse the effects of channel compensation methods. The second row in Table TABREF23 corresponds to the most difficult trials where the target trials come from mobile devices with different models while imposter trials come from the same device models. It is clear that severe degradation was caused by this kind of channel effects (i.e. decreasing within-speaker similarities while increasing between-speaker similarities), especially for females.",
"The results in the third row show the condition when target speakers at the test time use exactly the same device that was used for enrollment. Comparing this row with the results in the first row proves how much improvement can be achieved when exactly the same device is used by the target speaker.",
"The results in the fourth row show the condition when imposter speakers also use the same device model at test time to fool the system. So, in this case, there is no device mismatch in all trials. By comparing the results with the third row, we can see how much degradation is caused if we only consider the non-target trials with the same device.",
"The fifth row shows similar results when the imposter speakers use device of the same brand as the target speaker but with a different model. Surprisingly, in this case, the degradation is negligible and it means that mobiles from a specific brand (manufacturer) have different recording channel properties.",
"The degraded female results in the sixth row as compared to the third row show the effect of using a different device model from the same brand for target trials. For males, the filters brings almost the same subsets of trials, which explains the very similar results in this case.",
"Looking at the first two and the last row of Table TABREF23, one can notice the significantly worse performance obtained for the female trials as compared to males. Note that these three rows include target trials where the devices used for enrollment do not necessarily match the devices used for recording test utterances. On the other hand, in rows 3 to 6, which exclude such mismatched trials, the performance for males and females is comparable. This suggest that the degraded results for females are caused by some problematic trials with device mismatch. The exact reason for this degradation is so far unclear and needs a further investigation.",
"In the last row of the table, the condition of the second row is relaxed: the target device should have different model possibly from the same brand and imposter device only needs to be from the same brand. In this case, as was expected, the performance degradation is smaller than in the second row."
],
[
"In addition to speaker verification, we present several speech recognition experiments on Part3. The experiments were performed with the Kaldi toolkit BIBREF13. For training HMM-based MonoPhone model, only 20 thousands of shortest utterances are used and for other models the whole training data is used. The DNN based acoustic model is a time-delay DNN with low-rank factorized layers and skip connections without i-vector adaptation (a modified network from one of the best performing LibriSpeech recipes). The network is shown in Table TABREF25: there are 16 F-TDNN layers, with dimension 1536 and linear bottleneck layers of dimension 256. The acoustic model is trained for 10 epochs using lattice-free maximum mutual information (LF-MMI) with cross-entropy regularization BIBREF14. Re-scoring is done using a pruned trigram language model and the size of the dictionary is around 90,000 words.",
"Table TABREF26 shows the results in terms of word error rate (WER) for different evaluated methods. As can be seen, the created database can be used to train well performing and practically usable Persian ASR models."
],
[
"In this paper, we have described the final version of a large speech corpus, the DeepMine database. It has been collected using crowdsourcing and, according to the best of our knowledge, it is the largest public text-dependent and text-prompted speaker verification database in two languages: Persian and English. In addition, it is the largest text-independent speaker verification evaluation database, making it suitable to robustly evaluate state-of-the-art methods on different conditions. Alongside these appealing properties, it comes with phone-level transcription, making it suitable to train deep neural network models for Persian speech recognition.",
"We provided several evaluation protocols for each part of the database. The protocols allow researchers to investigate the performance of different methods in various scenarios and study the effects of channels, duration and phrase text on the performance. We also provide two test sets for speech recognition: One normal test set with a few minutes of speech for each speaker and one large test set with more (30 minutes on average) speech that can be used for any speaker adaptation method.",
"As baseline results, we reported the performance of an i-vector/HMM based method on Persian text-dependent part. Moreover, we conducted speech recognition experiments using conventional HMM-based methods, as well as state-of-the-art deep neural network based method using Kaldi toolkit with promising performance. Text-dependent results have shown that the DeepMine database is more challenging than RSR2015 and RedDots databases."
],
[
"The data collection project was mainly supported by Sharif DeepMine company. The work on the paper was supported by Czech National Science Foundation (GACR) project \"NEUREM3\" No. 19-26934X and the National Programme of Sustainability (NPU II) project \"IT4Innovations excellence in science - LQ1602\".",
""
]
],
"section_name": [
"Introduction",
"Data Collection",
"Data Collection ::: Post-Processing",
"Data Collection ::: Statistics",
"DeepMine Database Parts",
"DeepMine Database Parts ::: Part1 - Text-dependent (TD)",
"DeepMine Database Parts ::: Part2 - Text-prompted (TP)",
"DeepMine Database Parts ::: Part3 - Text-independent (TI)",
"DeepMine Database Parts ::: Part3 - Speech Recognition",
"Experiments and Results",
"Experiments and Results ::: Speaker Verification Experiments",
"Experiments and Results ::: Speech Recognition Experiments",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"7ac6e2a3b9ae01543c229748687c7faedf79820b",
"8c2238996acf081b032988507f9240cafd3da6ea"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"06dc02f9d6c3dc15ca66ab46615d4ad48ddaef67",
"694d7e59cf94062bd35c011850241d221e1760b5"
],
"answer": [
{
"evidence": [
"DeepMine is publicly available for everybody with a variety of licenses for different users. It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application. Each respondent installed the application on his/her personal device and recorded several phrases in different sessions. The Android application did various checks on each utterance and if it passed all of them, the respondent was directed to the next phrase. For more information about data collection scenario, please refer to BIBREF4."
],
"extractive_spans": [],
"free_form_answer": "The speech was collected from respondents using an android application.",
"highlighted_evidence": [
"DeepMine is publicly available for everybody with a variety of licenses for different users. It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application. Each respondent installed the application on his/her personal device and recorded several phrases in different sessions. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"DeepMine is publicly available for everybody with a variety of licenses for different users. It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application. Each respondent installed the application on his/her personal device and recorded several phrases in different sessions. The Android application did various checks on each utterance and if it passed all of them, the respondent was directed to the next phrase. For more information about data collection scenario, please refer to BIBREF4."
],
"extractive_spans": [
"Android application"
],
"free_form_answer": "",
"highlighted_evidence": [
"It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"46fb8d1765fe85e6c534c49f3ad8bbf1c33a482d",
"b55bb00383be46c2db9d5d758c096e65de7c7628"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"d84ce2e43ee5c79a10e3cdfbe5234dbbd2790978"
],
"answer": [
{
"evidence": [
"The DeepMine database consists of three parts. The first one contains fixed common phrases to perform text-dependent speaker verification. The second part consists of random sequences of words useful for text-prompted speaker verification, and the last part includes phrases with word- and phoneme-level transcription, useful for text-independent speaker verification using a random phrase (similar to Part4 of RedDots). This part can also serve for Persian ASR training. Each part is described in more details below. Table TABREF11 shows the number of unique phrases in each part of the database. For the English text-dependent part, the following phrases were selected from part1 of the RedDots database, hence the RedDots can be used as an additional training set for this part:",
"DeepMine Database Parts ::: Part1 - Text-dependent (TD)",
"This part contains a set of fixed phrases which are used to verify speakers in text-dependent mode. Each speaker utters 5 Persian phrases, and if the speaker can read English, 5 phrases selected from Part1 of the RedDots database are also recorded.",
"We have created three experimental setups with different numbers of speakers in the evaluation set. For each setup, speakers with more recording sessions are included in the evaluation set and the rest of the speakers are used for training in the background set (in the database, all background sets are basically training data). The rows in Table TABREF13 corresponds to the different experimental setups and shows the numbers of speakers in each set. Note that, for English, we have filtered the (Persian native) speakers by the ability to read English. Therefore, there are fewer speakers in each set for English than for Persian. There is a small “dev” set in each setup which can be used for parameter tuning to prevent over-tuning on the evaluation set.",
"Similar to the text-dependent case, three experimental setups with different number of speaker in the evaluation set are defined (corresponding to the rows in Table TABREF16). However, different strategy is used for defining trials: Depending on the enrollment condition (1- to 3-sess), trials are enrolled on utterances of all words from 1 to 3 different sessions (i.e. 3 to 9 utterances). Further, we consider two conditions for test utterances: seq test utterance with only 3 or 4 words and full test utterances with all words (i.e. same words as in enrollment but in different order). From all setups an all conditions, the 100-spk with 1-session enrolment (1-sess) is considered as the main evaluation condition for the text-prompted case. In Table TABREF16, the numbers of trials (sum for both seq and full conditions) for Persian 1-sess are shown for the different types of trials in the text-prompted SV. Again, we just create one IW trial for each IC trial.",
"Based on the recording sessions, we created two experimental setups for speaker verification. In the first one, respondents with at least 17 recording sessions are included to the evaluation set, respondents with 16 sessions to the development and the rest of respondents to the background set (can be used as training data). In the second setup, respondents with at least 8 sessions are included to the evaluation set, respondents with 6 or 7 sessions to the development and the rest of respondents to the background set. Table TABREF18 shows numbers of speakers in each set of the database for text-independent SV case."
],
"extractive_spans": [
"three experimental setups with different numbers of speakers in the evaluation set",
"three experimental setups with different number of speaker in the evaluation set are defined",
" first one, respondents with at least 17 recording sessions are included to the evaluation set, respondents with 16 sessions to the development and the rest of respondents to the background set",
"second setup, respondents with at least 8 sessions are included to the evaluation set, respondents with 6 or 7 sessions to the development and the rest of respondents to the background set"
],
"free_form_answer": "",
"highlighted_evidence": [
"The DeepMine database consists of three parts. The first one contains fixed common phrases to perform text-dependent speaker verification. The second part consists of random sequences of words useful for text-prompted speaker verification, and the last part includes phrases with word- and phoneme-level transcription, useful for text-independent speaker verification using a random phrase (similar to Part4 of RedDots).",
"DeepMine Database Parts ::: Part1 - Text-dependent (TD)\nThis part contains a set of fixed phrases which are used to verify speakers in text-dependent mode. Each speaker utters 5 Persian phrases, and if the speaker can read English, 5 phrases selected from Part1 of the RedDots database are also recorded.\n\nWe have created three experimental setups with different numbers of speakers in the evaluation set.",
"Similar to the text-dependent case, three experimental setups with different number of speaker in the evaluation set are defined (corresponding to the rows in Table TABREF16). However, different strategy is used for defining trials: Depending on the enrollment condition (1- to 3-sess), trials are enrolled on utterances of all words from 1 to 3 different sessions (i.e. 3 to 9 utterances). Further, we consider two conditions for test utterances: seq test utterance with only 3 or 4 words and full test utterances with all words (i.e. same words as in enrollment but in different order).",
"Based on the recording sessions, we created two experimental setups for speaker verification. In the first one, respondents with at least 17 recording sessions are included to the evaluation set, respondents with 16 sessions to the development and the rest of respondents to the background set (can be used as training data). In the second setup, respondents with at least 8 sessions are included to the evaluation set, respondents with 6 or 7 sessions to the development and the rest of respondents to the background set. ",
"two experimental setups for speaker verification"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1748ea1108af73f573e8ef7b9b609d27c2520773",
"f0b3a9e34ace1e564683eb764f7ff3366524bd57"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"74f50dd7f7ea66d80ae8f3b3a3d1682181f0a1f0"
],
"answer": [
{
"evidence": [
"DeepMine is publicly available for everybody with a variety of licenses for different users. It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application. Each respondent installed the application on his/her personal device and recorded several phrases in different sessions. The Android application did various checks on each utterance and if it passed all of them, the respondent was directed to the next phrase. For more information about data collection scenario, please refer to BIBREF4."
],
"extractive_spans": [
"Android application"
],
"free_form_answer": "",
"highlighted_evidence": [
"It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"who transcribed the corpus?",
"how was the speech collected?",
"what accents are present in the corpus?",
"what evaluation protocols are provided?",
"what age range is in the data?",
"what is the source of the data?"
],
"question_id": [
"ee2ad0ab64579cffb60853db6a8c0f971d7cf0ff",
"fde700d5134a9ae8f7579bea1f1b75f34d7c1c4c",
"b1a068c1050e2bed12d5c9550c73e59cd5b1f78d",
"f9edd8f9c13b54d8b1253ed30e7decc1999602da",
"d93c0e78a3fe890cd534a11276e934be68583f4b",
"30af1926559079f59b0df055da76a3a34df8336f"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1. Statistics of the DeepMine database.",
"Table 2. Numbers of unique phrases in each phrase type. “3- months/4-digits” means a random subset of months/digits used as test phrase. “Persian name and family” means a sequence of several Persian full names.",
"Table 3. Number of speakers in different evaluation sets of textdependent part.",
"Table 4. Numbers of trials in each different evaluation set for all possible trial types in text-dependent SV for 3-sess enrollment. TC: Target-Correct, TW: Target-Wrong, IC: Imposter-Correct, IW: Imposter-Wrong. K stands for thousand and M stands for million trials.",
"Table 5. Numbers of trials in each different evaluation sets for all possible trial types in text-prompted SV for 1-sess enrollment.",
"Table 8. Trial types in text-dependent speaker verification [3].",
"Table 6. Different evaluation sets of text-independent part of the DeepMine database.",
"Table 7. Numbers of trials in each different evaluation sets for all possible trial types in text-dependent SV for 3-sess enrollment. “cross” means enrollment using Persian utterances and test using 5 English utterances. K stands for thousand and M stands for million trials.",
"Table 9. i-vector/HMM results on text-dependent part (i.e. Part1) on 100-spk, 3-sess setup of Persian language. The results are only for correct trials (i.e. Target-Correct vs Imposter-Corrects). The first column shows filtering of the trials in target:imposter format. The first Y letter on each side of the colon shows the condition where the same mobile brand and the second Y shows exactly the same device models were used for recording the enrollment and test utterances.",
"Table 10. Factorized TDNN architecture for acoustic modeling using Kaldi toolkit [14]. Note that Batch-Norm applied after each ReLU is omitted for brevity.",
"Table 11. WER [%] obtained with different ASR systemss trained on DeepMine database. Test-set has 5.9 hours of speech and LargeTest-set contains 28.5 hours of speech."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"2-Table3-1.png",
"3-Table4-1.png",
"3-Table5-1.png",
"4-Table8-1.png",
"4-Table6-1.png",
"4-Table7-1.png",
"5-Table9-1.png",
"5-Table10-1.png",
"6-Table11-1.png"
]
} | [
"how was the speech collected?"
] | [
[
"1912.03627-Data Collection-0"
]
] | [
"The speech was collected from respondents using an android application."
] | 174 |
1810.12085 | Extractive Summarization of EHR Discharge Notes | Patient summarization is essential for clinicians to provide coordinated care and practice effective communication. Automated summarization has the potential to save time, standardize notes, aid clinical decision making, and reduce medical errors. Here we provide an upper bound on extractive summarization of discharge notes and develop an LSTM model to sequentially label topics of history of present illness notes. We achieve an F1 score of 0.876, which indicates that this model can be employed to create a dataset for evaluation of extractive summarization methods. | {
"paragraphs": [
[
"Summarization of patient information is essential to the practice of medicine. Clinicians must synthesize information from diverse data sources to communicate with colleagues and provide coordinated care. Examples of clinical summarization are abundant in practice; patient handoff summaries facilitate provider shift change, progress notes provide a daily status update for a patient, oral case presentations enable transfer of information from overnight admission to the care team and attending, and discharge summaries provide information about a patient's hospital visit to their primary care physician and other outpatient providers BIBREF0 .",
"Informal, unstructured, or poor quality summaries can lead to communication failures and even medical errors, yet clinical instruction on how to formulate clinical summaries is ad hoc and informal. Non-existent or limited search functionality, fragmented data sources, and limited visualizations in electronic health records (EHRs) make summarization challenging for providers BIBREF1 , BIBREF2 , BIBREF3 . Furthermore, while dictation of EHR notes allows clinicians to more efficiently document information at the point of care, the stream of consciousness-like writing can hinder the readability of notes. Kripalani et al. show that discharge summaries are often lacking key information, including treatment progression and follow-up protocols, which can hinder communication between hospital and community based clinicians BIBREF4 . Recently, St. Thomas Hospital in Nashville, TN stipulated that discharge notes be written within 48 hours of discharge following incidences where improper care was given to readmitted patients because the discharge summary for the previous admission was not completed BIBREF5 .",
"Automated summary generation has the potential to save clinician time, avoid medical errors, and aid clinical decision making. By organizing and synthesizing a patient's salient medical history, algorithms for patient summarization can enable better communication and care, particularly for chronically ill patients, whose medical records often contain hundreds of notes. In this work, we explore the automatic summarization of discharge summary notes, which are critical to ensuring continuity of care after hospitalization. We (1) provide an upper bound on extractive summarization by assessing how much information in the discharge note can be found in the rest of the patient's EHR notes and (2) develop a classifier for labeling the topics of history of present illness notes, a narrative section in the discharge summary that describes the patient's prior history and current symptoms. Such a classifier can be used to create topic specific evaluation sets for methods that perform extractive summarization. These aims are critical steps in ultimately developing methods that can automate discharge summary creation."
],
[
"In the broader field of summarization, automization was meant to standardize output while also saving time and effort. Pioneering strategies in summarization started by extracting \"significant\" sentences in the whole corpus to build an abstract where \"significant\" sentences were defined by the number of frequently occurring words BIBREF6 . These initial methods did not consider word meaning or syntax at either the sentence or paragraph level, which made them crude at best. More advanced extractive heuristics like topic modeling BIBREF7 , cue word dictionary approaches BIBREF8 , and title methods BIBREF9 for scoring content in a sentence followed soon after. For example, topic modeling extends initial frequency methods by assigning topics scores by frequency of topic signatures, clustering sentences with similar topics, and finally extracting the centroid sentence, which is considered the most representative sentence BIBREF10 . Recently, abstractive summarization approaches using sequence-to-sequence methods have been developed to generate new text that synthesizes original text BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 ; however, the field of abstractive summarization is quite young.",
"Existing approaches within the field of electronic health record summarization have largely been extractive and indicative, meaning that summaries point to important pieces in the original text rather than replacing the original text altogether. Few approaches have been deployed in practice, and even fewer have demonstrated impact on quality of care and outcomes BIBREF15 . Summarization strategies have ranged from extraction of “relevant” sentences from the original text to form the summary BIBREF16 , topic modeling of EHR notes using Latent Dirichlet allocation (LDA) or bayesian networks BIBREF15 , and knowledge based heuristic systems BIBREF17 . To our knowledge, there is no literature to date on extractive or abstractive EHR summarization using neural networks."
],
[
"MIMIC-III is a freely available, deidentified database containing electronic health records of patients admitted to an Intensive Care Unit (ICU) at Beth Israel Deaconess Medical Center between 2001 and 2012. The database contains all of the notes associated with each patient's time spent in the ICU as well as 55,177 discharge reports and 4,475 discharge addendums for 41,127 distinct patients. Only the original discharge reports were included in our analyses. Each discharge summary was divided into sections (Date of Birth, Sex, Chief Complaint, Major Surgical or Invasive Procedure, History of Present Illness, etc.) using a regular expression."
],
[
"Extractive summarization of discharge summaries relies on the assumption that the information in the discharge summary is documented elsewhere in the rest of the patient's notes. However, sometimes clinicians will document information in the discharge summary that may have been discussed throughout the hospital visit, but was never documented in the EHR. Thus, our first aim was to determine the upper bound of extractive summarization.",
"For each patient, we compared the text of the discharge summary to the remaining notes for the patient's current admission as well as their entire medical record. Concept Unique Identifiers (CUIs) from the Unified Medical Language System (UMLS) were compared in order to assess whether clinically relevant concepts in the discharge summary could be located in the remaining notes BIBREF18 . CUIs were extracted using Apache cTAKES BIBREF19 and filtered by removing the CUIs that are already subsumed by a longer spanning CUI. For example, CUIs for \"head\" and \"ache\" were removed if a CUI existed for \"head ache\" in order to extract the most clinically relevant CUIs.",
"In order to understand which sections of the discharge summaries would be the easiest or most difficult to summarize, we performed the same CUI overlap comparison for the chief complaint, major surgical or invasive procedure, discharge medication, and history of present illness sections of the discharge note separately. We calculated which fraction of the CUIs in each section were located in the rest of the patient's note for a specific hospital stay. We also calculated what percent of the genders recorded in the discharge summary were also recorded in the structured data for the patient.",
"For each of the 55,177 discharge summary reports in the MIMIC database, we calculated what fraction of the CUIs in the discharge summary could be found in the remaining notes for the patient's current admission ( INLINEFORM0 ) and in their entire longitudinal medical record ( INLINEFORM1 ). Table TABREF13 shows the CUI recall averaged across all discharge summaries by both subject_id and hadm_id. The low recall suggests that clinicians may incorporate information in the discharge note that had not been previously documented in the EHR. Figure FIGREF11 plots the relationship between the number of non-discharge notes for each patient and the CUI recall (top) and the number of total CUIs in non-discharge notes and the CUI recall (middle). The number of CUIs is a proxy for the length of the notes, and as expected, the CUI recall tends to be higher in patients with more and longer notes. The bottom panel in Figure FIGREF11 demonstrates that recall is not correlated with the patient's length of stay outside the ICU, which indicates that our upper bound calculation is not severely impacted by only having access to the patient's notes from their stay in the ICU.",
"Finally, Table TABREF14 shows the recall for the sex, chief complaint, procedure, discharge medication, and HPI discharge summary sections averaged across all the discharge summaries. The procedure section has the highest recall of 0.807, which is understandable because procedures undergone during an inpatient stay are most likely to be documented in an EHR. The recall for each of these five sections is much higher than the overall recall in Table TABREF13 , suggesting that extractive summarization may be easier for some sections of the discharge note.",
"Overall, this upper bound analysis suggests that we may not be able to recreate a discharge summary with extractive summarization alone. While CUI comparison allows for comparing medically relevant concepts, cTAKES's CUI labelling process is not perfect, and further work, perhaps through sophisticated regular expressions, is needed to define the limits of extractive summarization."
],
[
"We developed a classifier to label topics in the history of present illness (HPI) notes, including demographics, diagnosis history, and symptoms/signs, among others. A random sample of 515 history of present illness notes was taken, and each of the notes was manually annotated by one of eight annotators using the software Multi-document Annotation Environment (MAE) BIBREF20 . MAE provides an interactive GUI for annotators and exports the results of each annotation as an XML file with text spans and their associated labels for additional processing. 40% of the HPI notes were labeled by clinicians and 60% by non-clinicians. Table TABREF5 shows the instructions given to the annotators for each of the 10 labels. The entire HPI note was labeled with one of the labels, and instructions were given to label each clause in a sentence with the same label when possible.",
"Our LSTM model was adopted from prior work by Dernoncourt et al BIBREF21 . Whereas the Dernoncourt model jointly classified each sentence in a medical abstract, here we jointly classify each word in the HPI summary. Our model consists of four layers: a token embedding layer, a word contextual representation layer, a label scoring layer, and a label sequence optimization layer (Figure FIGREF9 ).",
"In the following descriptions, lowercase italics is used to denote scalars, lowercase bold is used to denote vectors, and uppercase italics is used to denote matrices.",
"Token Embedding Layer: In the token embedding layer, pretrained word embeddings are combined with learned character embeddings to create a hybrid token embedding for each word in the HPI note. The word embeddings, which are direct mappings from word INLINEFORM0 to vector INLINEFORM1 , were pretrained using word2vec BIBREF22 , BIBREF23 , BIBREF24 on all of the notes in MIMIC (v30) and only the discharge notes. Both the continuous bag of words (CBOW) and skip gram models were explored.",
"Let INLINEFORM0 be the sequence of characters comprising the word INLINEFORM1 . Each character is mapped to its embedding INLINEFORM2 , and all embeddings are input into a bidirectional LSTM, which ultimately outputs INLINEFORM3 , the character embedding of the word INLINEFORM4 .",
"The output of the token embedding layer is the vector e, which is the result of concatenation of the word embedding, t, and the character embedding, c.",
"Contextual Representation Layer: The contextual representation layer takes as input the sequence of word embeddings, INLINEFORM0 , and outputs an embedding of the contextual representation for each word in the HPI note. The word embeddings are fed into a bi-directional LSTM, which outputs INLINEFORM1 , a concatenation of the hidden states of the two LSTMs for each word.",
"Label Scoring Layer: At this point, each word INLINEFORM0 is associated with a hidden representation of the word, INLINEFORM1 . In the label scoring layer, we use a fully connected neural network with one hidden layer to output a score associated with each of the 10 categories for each word. Let INLINEFORM2 and INLINEFORM3 . We can compute a vector of scores s = INLINEFORM4 where the ith component of s is the score of class i for a given word.",
"Label Sequence Optimization Layer: The Label Sequence Optimization Layer computes the probability of a labeling sequence and finds the sequence with the highest probability. In order to condition the label for each word on the labels of its neighbors, we employ a linear chain conditional random field (CRF) to define a global score, INLINEFORM0 , for a sequence of words and their associated scores INLINEFORM1 and labels, INLINEFORM2 : DISPLAYFORM0 ",
"where T is a transition matrix INLINEFORM0 INLINEFORM1 and INLINEFORM2 are vectors of scores that describe the cost of beginning or ending with a label.",
"The probability of a sequence of labels is calculated by applying a softmax layer to obtain a probability of a sequence of labels: DISPLAYFORM0 ",
"Cross-entropy loss, INLINEFORM0 , is used as the objective function where INLINEFORM1 is the correct sequence of labels and the probability INLINEFORM2 is calculated according to the CRF.",
"We evaluated our model on the 515 annotated history of present illness notes, which were split in a 70% train set, 15% development set, and a 15% test set. The model is trained using the Adam algorithm for gradient-based optimization BIBREF25 with an initial learning rate = 0.001 and decay = 0.9. A dropout rate of 0.5 was applied for regularization, and each batch size = 20. The model ran for 20 epochs and was halted early if there was no improvement after 3 epochs.",
"We evaluated the impact of character embeddings, the choice of pretrained w2v embeddings, and the addition of learned word embeddings on model performance on the dev set. We report performance of the best performing model on the test set.",
"Table TABREF16 compares dev set performance of the model using various pretrained word embeddings, with and without character embeddings, and with pretrained versus learned word embeddings. The first row in each section is the performance of the model architecture described in the methods section for comparison. Models using word embeddings trained on the discharge summaries performed better than word embeddings trained on all MIMIC notes, likely because the discharge summary word embeddings better captured word use in discharge summaries alone. Interestingly, the continuous bag of words embeddings outperformed skip gram embeddings, which is surprising because the skip gram architecture typically works better for infrequent words BIBREF26 . As expected, inclusion of character embeddings increases performance by approximately 3%. The model with word embeddings learned in the model achieves the highest performance on the dev set (0.886), which may be because the pretrained worm embeddings were trained on a previous version of MIMIC. As a result, some words in the discharge summaries, such as mi-spelled words or rarer diseases and medications, did not have associated word embeddings. Performing a simple spell correction on out of vocab words may improve performance with pretrained word embeddings.",
"We evaluated the best performing model on the test set. The Learned Word Embeddings model achieved an accuracy of 0.88 and an F1-Score of 0.876 on the test set. Table TABREF17 shows the precision, recall, F1 score, and support for each of the ten labels, and Figure FIGREF18 shows the confusion matrix illustrating which labels were frequently misclassified. The demographics and patient movement labels achieved the highest F1 scores (0.96 and 0.93 respectively) while the vitals/labs and medication history labels had the lowest F1 scores (0.40 and 0.66 respectively). The demographics section consistently occurs at the beginning of the HPI note, and the patient movement section uses a limited vocab (transferred, admitted, etc.), which may explain their high F1 scores. On the other hand, the vitals/labs and medication history sections had the lowest support, which may explain why they were more challenging to label.",
"Words that belonged to the diagnosis history, patient movement, and procedure/results sections were frequently labeled as symptoms/signs (Figure FIGREF18 ). Diagnosis history sections may be labeled frequently as symptoms/signs because symptoms/diseases can either be described as part of the patient's diagnosis history or current symptoms depending on when the symptom/disease occurred. However, many of the misclassification errors may be due to inconsistency in manual labelling among annotators. For example, sentences describing both patient movement and patient symptoms (e.g. \"the patient was transferred to the hospital for his hypertension\") were labeled entirely as 'patient movement' by some annotators while other annotators labeled the different clauses of the sentence separately as 'patient movement' and 'symptoms/signs.' Further standardization among annotators is needed to avoid these misclassifications. Future work is needed to obtain additional manual annotations where each HPI note is annotated by multiple annotators. This will allow for calculation of Cohen's kappa, which measures inter-annotator agreement, and comparison of clinician and non-clinician annotator reliability.",
"Future work is also needed to better understand commonly mislabeled categories and explore alternative model architectures. Here we perform word level label prediction, which can result in phrases that contain multiple labels. For example, the phrase \"history of neck pain\" can be labeled with both 'diagnosis history' and 'symptoms/signs' labels. Post-processing is needed to create a final label prediction for each phrase. While phrase level prediction may resolve these challenges, it is difficult to segment the HPI note into phrases for prediction, as a single phrase may truly contain multiple labels. Segmentation of sentences by punctuation, conjunctions, and prepositions may yield the best phrase chunker for discharge summary text.",
"Finally, supplementing the word embeddings in our LSTM model with CUIs may further improve performance. While word embeddings do well in learning the contextual context of words, CUIs allow for more explicit incorporation of medical domain expertise. By concatenating the CUI for each word with its hybrid token embedding, we may be able to leverage both data driven and ontology driven approaches."
],
[
"In this paper we developed a CUI-based upper bound on extractive summarization of discharge summaries and presented a NN architecture that jointly classifies words in history of present illness notes. We demonstrate that our model can achieve excellent performance on a small dataset with known heterogeneity among annotators. This model can be applied to the 55,000 discharge summaries in MIMIC to create a dataset for evaluation of extractive summarization methods."
],
[
"We would like to thank our annotators, Andrew Goldberg, Laurie Alsentzer, Elaine Goldberg, Andy Alsentzer, Grace Lo, and Josh Donis. We would also like to acknowledge Pete Szolovits for his guidance and for providing the pretrained word embeddings and Tristan Naumann for providing the MIMIC CUIs."
]
],
"section_name": [
"Introduction",
"Related Work",
"Data",
"Upper Bound on Summarization",
"Labeling History of Present Illness Notes",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"51e0f375ad3f081901134ee143e15c0e37e89707",
"80ed819441ed8cf2ca4f3aee683adc345c8184c6"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1. HPI Categories and Annotation Instructions"
],
"extractive_spans": [],
"free_form_answer": "Demographics Age, DiagnosisHistory, MedicationHistory, ProcedureHistory, Symptoms/Signs, Vitals/Labs, Procedures/Results, Meds/Treatments, Movement, Other.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1. HPI Categories and Annotation Instructions"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We developed a classifier to label topics in the history of present illness (HPI) notes, including demographics, diagnosis history, and symptoms/signs, among others. A random sample of 515 history of present illness notes was taken, and each of the notes was manually annotated by one of eight annotators using the software Multi-document Annotation Environment (MAE) BIBREF20 . MAE provides an interactive GUI for annotators and exports the results of each annotation as an XML file with text spans and their associated labels for additional processing. 40% of the HPI notes were labeled by clinicians and 60% by non-clinicians. Table TABREF5 shows the instructions given to the annotators for each of the 10 labels. The entire HPI note was labeled with one of the labels, and instructions were given to label each clause in a sentence with the same label when possible."
],
"extractive_spans": [],
"free_form_answer": "Demographics, Diagnosis History, Medication History, Procedure History, Symptoms, Labs, Procedures, Treatments, Hospital movements, and others",
"highlighted_evidence": [
"We developed a classifier to label topics in the history of present illness (HPI) notes, including demographics, diagnosis history, and symptoms/signs, among others",
"Table TABREF5 shows the instructions given to the annotators for each of the 10 labels."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"e106fce0251a06f92ef37403e6e14546f8296547"
],
"answer": [
{
"evidence": [
"We evaluated our model on the 515 annotated history of present illness notes, which were split in a 70% train set, 15% development set, and a 15% test set. The model is trained using the Adam algorithm for gradient-based optimization BIBREF25 with an initial learning rate = 0.001 and decay = 0.9. A dropout rate of 0.5 was applied for regularization, and each batch size = 20. The model ran for 20 epochs and was halted early if there was no improvement after 3 epochs.",
"We evaluated the impact of character embeddings, the choice of pretrained w2v embeddings, and the addition of learned word embeddings on model performance on the dev set. We report performance of the best performing model on the test set.",
"Table TABREF16 compares dev set performance of the model using various pretrained word embeddings, with and without character embeddings, and with pretrained versus learned word embeddings. The first row in each section is the performance of the model architecture described in the methods section for comparison. Models using word embeddings trained on the discharge summaries performed better than word embeddings trained on all MIMIC notes, likely because the discharge summary word embeddings better captured word use in discharge summaries alone. Interestingly, the continuous bag of words embeddings outperformed skip gram embeddings, which is surprising because the skip gram architecture typically works better for infrequent words BIBREF26 . As expected, inclusion of character embeddings increases performance by approximately 3%. The model with word embeddings learned in the model achieves the highest performance on the dev set (0.886), which may be because the pretrained worm embeddings were trained on a previous version of MIMIC. As a result, some words in the discharge summaries, such as mi-spelled words or rarer diseases and medications, did not have associated word embeddings. Performing a simple spell correction on out of vocab words may improve performance with pretrained word embeddings.",
"FLOAT SELECTED: Table 3. Average Recall for five sections of the discharge summary. Recall for each patient’s sex was calculated by examining the structured data for the patient’s current admission, and recall for the remaining sections was calculated by comparing CUI overlap between the section and the remaining notes for the current admission."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluated our model on the 515 annotated history of present illness notes, which were split in a 70% train set, 15% development set, and a 15% test set.",
"We evaluated the impact of character embeddings, the choice of pretrained w2v embeddings, and the addition of learned word embeddings on model performance on the dev set. We report performance of the best performing model on the test set.",
"Table TABREF16 compares dev set performance of the model using various pretrained word embeddings, with and without character embeddings, and with pretrained versus learned word embeddings.",
"FLOAT SELECTED: Table 3. Average Recall for five sections of the discharge summary. Recall for each patient’s sex was calculated by examining the structured data for the patient’s current admission, and recall for the remaining sections was calculated by comparing CUI overlap between the section and the remaining notes for the current admission."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"08da8e6994cdd6cf9a6a4c186d9ee3752b797873",
"c71e5a348e32b9fc071bde72facbe4bde04961a8"
],
"answer": [
{
"evidence": [
"MIMIC-III is a freely available, deidentified database containing electronic health records of patients admitted to an Intensive Care Unit (ICU) at Beth Israel Deaconess Medical Center between 2001 and 2012. The database contains all of the notes associated with each patient's time spent in the ICU as well as 55,177 discharge reports and 4,475 discharge addendums for 41,127 distinct patients. Only the original discharge reports were included in our analyses. Each discharge summary was divided into sections (Date of Birth, Sex, Chief Complaint, Major Surgical or Invasive Procedure, History of Present Illness, etc.) using a regular expression."
],
"extractive_spans": [
"MIMIC-III"
],
"free_form_answer": "",
"highlighted_evidence": [
"MIMIC-III is a freely available, deidentified database containing electronic health records of patients admitted to an Intensive Care Unit (ICU) at Beth Israel Deaconess Medical Center between 2001 and 2012"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"MIMIC-III is a freely available, deidentified database containing electronic health records of patients admitted to an Intensive Care Unit (ICU) at Beth Israel Deaconess Medical Center between 2001 and 2012. The database contains all of the notes associated with each patient's time spent in the ICU as well as 55,177 discharge reports and 4,475 discharge addendums for 41,127 distinct patients. Only the original discharge reports were included in our analyses. Each discharge summary was divided into sections (Date of Birth, Sex, Chief Complaint, Major Surgical or Invasive Procedure, History of Present Illness, etc.) using a regular expression."
],
"extractive_spans": [
"MIMIC-III"
],
"free_form_answer": "",
"highlighted_evidence": [
"MIMIC-III is a freely available, deidentified database containing electronic health records of patients admitted to an Intensive Care Unit (ICU) at Beth Israel Deaconess Medical Center between 2001 and 2012. The database contains all of the notes associated with each patient's time spent in the ICU as well as 55,177 discharge reports and 4,475 discharge addendums for 41,127 distinct patients. Only the original discharge reports were included in our analyses. Each discharge summary was divided into sections (Date of Birth, Sex, Chief Complaint, Major Surgical or Invasive Procedure, History of Present Illness, etc.) using a regular expression."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"what topics did they label?",
"did they compare with other extractive summarization methods?",
"what datasets were used?"
],
"question_id": [
"ceb767e33fde4b927e730f893db5ece947ffb0d8",
"c2cb6c4500d9e02fc9a1bdffd22c3df69655189f",
"c571deefe93f0a41b60f9886db119947648e967c"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1. HPI Categories and Annotation Instructions",
"Figure 1. LSTM Model for sequential HPI word classification. Xi: token i, Z1 to Zl: characters of Xi, t: word embeddings, c: character embeddings, e1:k: hybrid token embeddings, h1:j : contextual representation embedings, s1:m",
"Figure 2. Relationship between the number of non-discharge notes (top), number of non-discharge CUIs (middle), or the number of hours the patient spent outside the ICU (bottom) and the recall for each discharge summary.",
"Table 3. Average Recall for five sections of the discharge summary. Recall for each patient’s sex was calculated by examining the structured data for the patient’s current admission, and recall for the remaining sections was calculated by comparing CUI overlap between the section and the remaining notes for the current admission.",
"Table 2. Discharge CUI recall for each patient’s current hospital admission (by hadm id) and their entire EHR (by subject id), averaged across all discharge summaries",
"Figure 3. Confusion Matrix for the Learned Word Embedding model on the test set. True labels are on the x-axis, and predicted labels are on the y-axis.",
"Table 4. F1-Scores on the development set under various model architectures. The first model in each section is the same model described in the methods section. Top performing models in each section are in bold."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"4-Figure2-1.png",
"5-Table3-1.png",
"5-Table2-1.png",
"6-Figure3-1.png",
"6-Table4-1.png"
]
} | [
"what topics did they label?"
] | [
[
"1810.12085-3-Table1-1.png",
"1810.12085-Labeling History of Present Illness Notes-0"
]
] | [
"Demographics, Diagnosis History, Medication History, Procedure History, Symptoms, Labs, Procedures, Treatments, Hospital movements, and others"
] | 175 |
1610.07809 | How Document Pre-processing affects Keyphrase Extraction Performance | The SemEval-2010 benchmark dataset has brought renewed attention to the task of automatic keyphrase extraction. This dataset is made up of scientific articles that were automatically converted from PDF format to plain text and thus require careful preprocessing so that irrevelant spans of text do not negatively affect keyphrase extraction performance. In previous work, a wide range of document preprocessing techniques were described but their impact on the overall performance of keyphrase extraction models is still unexplored. Here, we re-assess the performance of several keyphrase extraction models and measure their robustness against increasingly sophisticated levels of document preprocessing. | {
"paragraphs": [
[
" This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/",
"Recent years have seen a surge of interest in automatic keyphrase extraction, thanks to the availability of the SemEval-2010 benchmark dataset BIBREF0 . This dataset is composed of documents (scientific articles) that were automatically converted from PDF format to plain text. As a result, most documents contain irrelevant pieces of text (e.g. muddled sentences, tables, equations, footnotes) that require special handling, so as to not hinder the performance of keyphrase extraction systems. In previous work, these are usually removed at the preprocessing step, but using a variety of techniques ranging from simple heuristics BIBREF1 , BIBREF2 , BIBREF3 to sophisticated document logical structure detection on richly-formatted documents recovered from Google Scholar BIBREF4 . Under such conditions, it may prove difficult to draw firm conclusions about which keyphrase extraction model performs best, as the impact of preprocessing on overall performance cannot be properly quantified.",
"While previous work clearly states that efficient document preprocessing is a prerequisite for the extraction of high quality keyphrases, there is, to our best knowledge, no empirical evidence of how preprocessing affects keyphrase extraction performance. In this paper, we re-assess the performance of several state-of-the-art keyphrase extraction models at increasingly sophisticated levels of preprocessing. Three incremental levels of document preprocessing are experimented with: raw text, text cleaning through document logical structure detection, and removal of keyphrase sparse sections of the document. In doing so, we present the first consistent comparison of different keyphrase extraction models and study their robustness over noisy text. More precisely, our contributions are:"
],
[
"The SemEval-2010 benchmark dataset BIBREF0 is composed of 244 scientific articles collected from the ACM Digital Library (conference and workshop papers). The input papers ranged from 6 to 8 pages and were converted from PDF format to plain text using an off-the-shelf tool. The only preprocessing applied is a systematic dehyphenation at line breaks and removal of author-assigned keyphrases. Scientific articles were selected from four different research areas as defined in the ACM classification, and were equally distributed into training (144 articles) and test (100 articles) sets. Gold standard keyphrases are composed of both author-assigned keyphrases collected from the original PDF files and reader-assigned keyphrases provided by student annotators.",
"Long documents such as those in the SemEval-2010 benchmark dataset are notoriously difficult to handle due to the large number of keyphrase candidates (i.e. phrases that are eligible to be keyphrases) that the systems have to cope with BIBREF6 . Furthermore, noisy textual content, whether due to format conversion errors or to unusable elements (e.g. equations), yield many spurious keyphrase candidates that negatively affect keyphrase extraction performance. This is particularly true for systems that make use of core NLP tools to select candidates, that in turn exhibit poor performance on degraded text. Filtering out irrelevant text is therefore needed for addressing these issues.",
"In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below.",
"Table shows the average number of sentences and words along with the maximum possible recall for each level of preprocessing. The maximum recall is obtained by computing the fraction of the reference keyphrases that occur in the documents. We observe that the level 2 preprocessing succeeds in eliminating irrelevant text by significantly reducing the number of words (-19%) while maintaining a high maximum recall (-2%). Level 3 preprocessing drastically reduce the number of words to less than a quarter of the original amount while interestingly still preserving high recall."
],
[
"We re-implemented five keyphrase extraction models : the first two are commonly used as baselines, the third is a resource-lean unsupervised graph-based ranking approach, and the last two were among the top performing systems in the SemEval-2010 keyphrase extraction task BIBREF0 . We note that two of the systems are supervised and rely on the training set to build their classification models. Document frequency counts are also computed on the training set. Stemming is applied to allow more robust matching. The different keyphrase extraction models are briefly described below:",
"Each model uses a distinct keyphrase candidate selection method that provides a trade-off between the highest attainable recall and the size of set of candidates. Table summarizes these numbers for each model. Syntax-based selection heuristics, as used by TopicRank and WINGNUS, are better suited to prune candidates that are unlikely to be keyphrases. As for KP-miner, removing infrequent candidates may seem rather blunt, but it turns out to be a simple yet effective pruning method when dealing with long documents. For details on how candidate selection methods affect keyphrase extraction, please refer to BIBREF16 .",
"Apart from TopicRank that groups similar candidates into topics, the other models do not have any redundancy control mechanism. Yet, recent work has shown that up to 12% of the overall error made by state-of-the-art keyphrase extraction systems were due to redundancy BIBREF6 , BIBREF17 . Therefore as a post-ranking step, we remove redundant keyphrases from the ranked lists generated by all models. A keyphrase is considered redundant if it is included in another keyphrase that is ranked higher in the list."
],
[
"We follow the evaluation procedure used in the SemEval-2010 competition and evaluate the performance of each model in terms of f-measure (F) at the top INLINEFORM0 keyphrases. We use the set of combined author- and reader-assigned keyphrases as reference keyphrases. Extracted and reference keyphrases are stemmed to reduce the number of mismatches."
],
[
"The performances of the keyphrase extraction models at each preprocessing level are shown in Table . Overall, we observe a significant increase of performance for all models at levels 3, confirming that document preprocessing plays an important role in keyphrase extraction performance. Also, the difference of f-score between the models, as measured by the standard deviation INLINEFORM0 , gradually decreases with the increasing level of preprocessing. This result strengthens the assumption made in this paper, that performance variation across models is partly a function of the effectiveness of document preprocessing.",
"Somewhat surprisingly, the ordering of the two best models reverses at level 3. This showcases that rankings are heavily influenced by the preprocessing stage, despite the common lack of details and analysis it suffers from in explanatory papers. We also remark that the top performing model, namely KP-Miner, is unsupervised which supports the findings of BIBREF6 indicating that recent unsupervised approaches have rivaled their supervised counterparts in performance.",
"In an attempt to quantify the performance variation across preprocessing levels, we compute the standard deviation INLINEFORM0 for each model. Here we see that unsupervised models are more sensitive to noisy input, as revealed by higher standard deviations. We found two main reasons for this. First, using multiple discriminative features to rank keyphrase candidates adds inherent robustness to the models. Second, the supervision signal helps models to disregard noise.",
"In Table , we compare the outputs of the five models by measuring the percentage of valid keyphrases that are retrieved by all models at once for each preprocessing level. By these additional results, we aim to assess whether document preprocessing smoothes differences between models. We observe that the overlap between the outputs of the different models increases along with the level of preprocessing. This suggests that document preprocessing reduces the effect that the keyphrase extraction model in itself has on overall performance. In other words, the singularity of each model fades away gradually with increase in preprocessing effort."
],
[
"Being able to reproduce experimental results is a central aspect of the scientific method. While assessing the importance of the preprocessing stage for five approaches, we found that several results were not reproducible, as shown in Table .",
"We note that the trends for baselines and high ranking systems are opposite: compared to the published results, our reproduction of top systems under-performs and our reproduction of baselines over-performs. We hypothesise that this stems from a difference in hyperparameter tuning, including the ones that the preprocessing stage makes implicit. Competitors have strong incentives to correctly optimize hyperparameters, to achieve a high ranking and get more publicity for their work while competition organizers might have the opposite incentive: too strong a baseline might not be considered a baseline anymore.",
"We also observe that with this leveled preprocessing, the gap between baselines and top systems is much smaller, lessening again the importance of raw scores and rankings to interpret the shared task results and emphasizing the importance of understanding correctly the preprocessing stage."
],
[
"In the previous sections, we provided empirical evidence that document preprocessing weighs heavily on the outcome of keyphrase extraction models. This raises the question of whether further improvement might be gained from a more aggressive preprocessing. To answer this question, we take another step beyond content filtering and further abridge the input text from level 3 preprocessed documents using an unsupervised summarization technique. More specifically, we keep the title and abstract intact, as they are the two most keyphrase dense parts within scientific articles BIBREF4 , and select only the most content bearing sentences from the remaining contents. To do this, sentences are ordered using TextRank BIBREF14 and the less informative ones, as determined by their TextRank scores normalized by their lengths in words, are filtered out.",
"Finding the optimal subset of sentences from already shortened documents is however no trivial task as maximum recall linearly decreases with the number of sentences discarded. Here, we simply set the reduction ratio to 0.865 so that the average maximum recall on the training set does not lose more than 5%. Table shows the reduction in the average number of sentences and words compared to level 3 preprocessing.",
"The performances of the keyphrase extraction models at level 4 preprocessing are shown in Table . We note that two models, namely TopicRank and TF INLINEFORM0 IDF, lose on performance. These two models mainly rely on frequency counts to rank keyphrase candidates, which in turn become less reliable at level 4 because of the very short length of the documents. Other models however have their f-scores once again increased, thus indicating that further improvement is possible from more reductive document preprocessing strategies."
],
[
"In this study, we re-assessed the performance of several keyphrase extraction models and showed that performance variation across models is partly a function of the effectiveness of the document preprocessing. Our results also suggest that supervised keyphrase extraction models are more robust to noisy input.",
"Given our findings, we recommend that future works use a common preprocessing to evaluate the interest of keyphrase extraction approaches. For this reason we make the four levels of preprocessing used in this study available for the community."
]
],
"section_name": [
"Introduction",
"Dataset and Preprocessing",
"Keyphrase Extraction Models",
"Experimental settings",
"Results",
"Reproducibility",
"Additional experiments",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"09469a48cae3f124239591fdbb8585169d147013",
"1493b5fd78d45336446cd92f12085d8cf1fa7baf"
],
"answer": [
{
"evidence": [
"While previous work clearly states that efficient document preprocessing is a prerequisite for the extraction of high quality keyphrases, there is, to our best knowledge, no empirical evidence of how preprocessing affects keyphrase extraction performance. In this paper, we re-assess the performance of several state-of-the-art keyphrase extraction models at increasingly sophisticated levels of preprocessing. Three incremental levels of document preprocessing are experimented with: raw text, text cleaning through document logical structure detection, and removal of keyphrase sparse sections of the document. In doing so, we present the first consistent comparison of different keyphrase extraction models and study their robustness over noisy text. More precisely, our contributions are:"
],
"extractive_spans": [
"raw text",
"text cleaning through document logical structure detection",
"removal of keyphrase sparse sections of the document"
],
"free_form_answer": "",
"highlighted_evidence": [
"Three incremental levels of document preprocessing are experimented with: raw text, text cleaning through document logical structure detection, and removal of keyphrase sparse sections of the document. In doing so, we present the first consistent comparison of different keyphrase extraction models and study their robustness over noisy text."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below.",
"Table shows the average number of sentences and words along with the maximum possible recall for each level of preprocessing. The maximum recall is obtained by computing the fraction of the reference keyphrases that occur in the documents. We observe that the level 2 preprocessing succeeds in eliminating irrelevant text by significantly reducing the number of words (-19%) while maintaining a high maximum recall (-2%). Level 3 preprocessing drastically reduce the number of words to less than a quarter of the original amount while interestingly still preserving high recall.",
"FLOAT SELECTED: Table 1: Statistics computed at the different levels of document preprocessing on the training set."
],
"extractive_spans": [],
"free_form_answer": "Level 1, Level 2 and Level 3.",
"highlighted_evidence": [
"In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below.",
"Table shows the average number of sentences and words along with the maximum possible recall for each level of preprocessing. The maximum recall is obtained by computing the fraction of the reference keyphrases that occur in the documents. We observe that the level 2 preprocessing succeeds in eliminating irrelevant text by significantly reducing the number of words (-19%) while maintaining a high maximum recall (-2%). Level 3 preprocessing drastically reduce the number of words to less than a quarter of the original amount while interestingly still preserving high recall.",
"FLOAT SELECTED: Table 1: Statistics computed at the different levels of document preprocessing on the training set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ee6ddbbbecba446d0c4b929a5b10bdc8acf8fe20"
],
"answer": [
{
"evidence": [
"In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (LVL1, LVL2, LVL3) \n- Stanford CoreNLP\n- Optical Character Recognition (OCR) system, ParsCIT \n- further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion.",
"highlighted_evidence": [
"In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1284026cc7bf5e98b94327f636db27b27d0b71ec",
"bc58acd4e20ad5f5981e3e37789586b15f014511"
],
"answer": [
{
"evidence": [
"The SemEval-2010 benchmark dataset BIBREF0 is composed of 244 scientific articles collected from the ACM Digital Library (conference and workshop papers). The input papers ranged from 6 to 8 pages and were converted from PDF format to plain text using an off-the-shelf tool. The only preprocessing applied is a systematic dehyphenation at line breaks and removal of author-assigned keyphrases. Scientific articles were selected from four different research areas as defined in the ACM classification, and were equally distributed into training (144 articles) and test (100 articles) sets. Gold standard keyphrases are composed of both author-assigned keyphrases collected from the original PDF files and reader-assigned keyphrases provided by student annotators."
],
"extractive_spans": [
"244"
],
"free_form_answer": "",
"highlighted_evidence": [
"The SemEval-2010 benchmark dataset BIBREF0 is composed of 244 scientific articles collected from the ACM Digital Library (conference and workshop papers)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The SemEval-2010 benchmark dataset BIBREF0 is composed of 244 scientific articles collected from the ACM Digital Library (conference and workshop papers). The input papers ranged from 6 to 8 pages and were converted from PDF format to plain text using an off-the-shelf tool. The only preprocessing applied is a systematic dehyphenation at line breaks and removal of author-assigned keyphrases. Scientific articles were selected from four different research areas as defined in the ACM classification, and were equally distributed into training (144 articles) and test (100 articles) sets. Gold standard keyphrases are composed of both author-assigned keyphrases collected from the original PDF files and reader-assigned keyphrases provided by student annotators."
],
"extractive_spans": [
"244 "
],
"free_form_answer": "",
"highlighted_evidence": [
"The SemEval-2010 benchmark dataset BIBREF0 is composed of 244 scientific articles collected from the ACM Digital Library (conference and workshop papers). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"what levels of document preprocessing are looked at?",
"what keyphrase extraction models were reassessed?",
"how many articles are in the dataset?"
],
"question_id": [
"06eb9f2320451df83e27362c22eb02f4a426a018",
"e54257585cc75564341eb02bdc63ff8111992f82",
"2a3e36c220e7b47c1b652511a4fdd7238a74a68f"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: Statistics computed at the different levels of document preprocessing on the training set.",
"Table 2: Maximum recall and average number of keyphrase candidates for each model.",
"Table 3: F-scores computed at the top 10 extracted keyphrases for the unsupervised (Unsup.) and supervised (Sup.) models at each preprocessing level. We also report the standard deviation across the five models for each level (σ1) and the standard deviation across the three levels for each model (σ2). α and β indicate significance at the 0.05 level using Student’s t-test against level 1 and level 2 respectively.",
"Table 5: Difference in f-score between our re-implementation and the original scores reported in (Hasan and Ng, 2014; Bougouin et al., 2013).",
"Table 6: Statistics computed at level 4 of document preprocessing on the training set. We also report the relative difference with respect to level 3 preprocessing.",
"Table 7: F-scores computed at the top 10 extracted keyphrases for the unsupervised (Unsup.) and supervised (Sup.) models at level 4 preprocessing. We also report the difference in f-score with level 3 preprocessing."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Table5-1.png",
"6-Table6-1.png",
"6-Table7-1.png"
]
} | [
"what levels of document preprocessing are looked at?",
"what keyphrase extraction models were reassessed?"
] | [
[
"1610.07809-Dataset and Preprocessing-3",
"1610.07809-Introduction-2",
"1610.07809-3-Table1-1.png",
"1610.07809-Dataset and Preprocessing-2"
],
[
"1610.07809-Dataset and Preprocessing-2"
]
] | [
"Level 1, Level 2 and Level 3.",
"Answer with content missing: (LVL1, LVL2, LVL3) \n- Stanford CoreNLP\n- Optical Character Recognition (OCR) system, ParsCIT \n- further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion."
] | 176 |
2003.03044 | A Corpus for Detecting High-Context Medical Conditions in Intensive Care Patient Notes Focusing on Frequently Readmitted Patients | A crucial step within secondary analysis of electronic health records (EHRs) is to identify the patient cohort under investigation. While EHRs contain medical billing codes that aim to represent the conditions and treatments patients may have, much of the information is only present in the patient notes. Therefore, it is critical to develop robust algorithms to infer patients' conditions and treatments from their written notes. In this paper, we introduce a dataset for patient phenotyping, a task that is defined as the identification of whether a patient has a given medical condition (also referred to as clinical indication or phenotype) based on their patient note. Nursing Progress Notes and Discharge Summaries from the Intensive Care Unit of a large tertiary care hospital were manually annotated for the presence of several high-context phenotypes relevant to treatment and risk of re-hospitalization. This dataset contains 1102 Discharge Summaries and 1000 Nursing Progress Notes. Each Discharge Summary and Progress Note has been annotated by at least two expert human annotators (one clinical researcher and one resident physician). Annotated phenotypes include treatment non-adherence, chronic pain, advanced/metastatic cancer, as well as 10 other phenotypes. This dataset can be utilized for academic and industrial research in medicine and computer science, particularly within the field of medical natural language processing. | {
"paragraphs": [
[
"With the widespread adoption of electronic health records (EHRs), medical data are being generated and stored digitally in vast quantities BIBREF0. While much EHR data are structured and amenable to analysis, there appears to be limited homogeneity in data completeness and quality BIBREF1, and it is estimated that the majority of healthcare data are being generated in unstructured, text-based format BIBREF2. The generation and storage of these unstructured data come concurrently with policy initiatives that seek to utilize preventative measures to reduce hospital admission and readmission BIBREF3.",
"Chronic illnesses, behavioral factors, and social determinants of health are known to be associated with higher risks of hospital readmission, BIBREF4 and though behavioral factors and social determinants of health are often determined at the point of care, their identification may not always be curated in structured format within the EHR in the same manner that other factors associated with routine patient history taking and physical examination are BIBREF5. Identifying these patient attributes within EHRs in a reliable manner has the potential to reveal actionable associations which otherwise may remain poorly defined.",
"As EHRs act to streamline the healthcare administration process, much of the data collected and stored in structured format may be those data most relevant to reimbursement and billing, and may not necessarily be those data which were most relevant during the clinical encounter. For example, a diabetic patient who does not adhere to an insulin treatment regimen and who thereafter presents to the hospital with symptoms indicating diabetic ketoacidosis (DKA) will be treated and considered administratively as an individual presenting with DKA, though that medical emergency may have been secondary to non-adherence to the initial treatment regimen in the setting of diabetes. In this instance, any retrospective study analyzing only the structured data from many similarly selected clinical encounters will necessarily then underestimate the effect of treatment non-adherence with respect to hospital admissions.",
"While this form of high context information may not be found in the structured EHR data, it may be accessible in patient notes, including nursing progress notes and discharge summaries, particularly through the utilization of natural language processing (NLP) technologies. BIBREF6, BIBREF7 Given progress in NLP methods, we sought to address the issue of unstructured clinical text by defining and annotating clinical phenotypes in text which may otherwise be prohibitively difficult to discern in the structured data associated with the text entry. For this task, we chose the notes present in the publicly available MIMIC database BIBREF8.",
"Given the MIMIC database as substrate and the aforementioned policy initiatives to reduce unnecessary hospital readmissions, as well as the goal of providing structure to text, we elected to focus on patients who were frequently readmitted to the ICU BIBREF9. In particular, a patient who is admitted to the ICU more than three times in a single year. By defining our cohort in this way we sought to ensure we were able to capture those characteristics unique to the cohort in a manner which may yield actionable intelligence on interventions to assist this patient population."
],
[
"We have created a dataset of discharge summaries and nursing notes, all in the English language, with a focus on frequently readmitted patients, labeled with 15 clinical patient phenotypes believed to be associated with risk of recurrent Intensive Care Unit (ICU) readmission per our domain experts (co-authors LAC, PAT, DAG) as well as the literature. BIBREF10 BIBREF11 BIBREF12",
"Each entry in this database of consists of a Subject Identifier (integer), a Hospital Admission Identifier (integer), Category (string), Text (string), 15 Phenotypes (binary) including “None” and “Unsure”, Batch Date (string), and Operators (string). These variables are sufficient to use the data set alone, or to join it to the MIMIC-III database by Subject Identifier or Hospital Admission Identifier for additional patient-level or admission-level data, respectively. The MIMIC database BIBREF8 was utilized to extract Subject Identifiers, Hospital Admission Identifiers, and Note Text.",
"Annotated discharge summaries had a median token count of 1417.50 (Q1-Q3: 1046.75 - 1926.00) with a vocabulary of 26454 unique tokens, while nursing notes had a median count of 208 (Q1-Q3: 120 - 312) with a vocabulary of 12865 unique tokens.",
"Table defines each of the considered clinical patient phenotypes. Table counts the occurrences of these phenotypes across patient notes and Figure contains the corresponding correlation matrix. Lastly, Table presents an overview of some descriptive statistics on the patient notes' lengths."
],
[
"Clinical researchers teamed with junior medical residents in collaboration with more senior intensive care physicians to carry out text annotation over the period of one year BIBREF13. Operators were grouped to facilitate the annotation of notes in duplicate, allowing for cases of disagreement between operators. The operators within each team were instructed to work independently on note annotation. Clinical texts were annotated in batches which were time-stamped on their day of creation, when both operators in a team completed annotation of a batch, a new batch was created and transferred to them.",
"Two groups (group 1: co-authors ETM & JTW; group 2: co-authors JW & JF) of two operator pairs of one clinical researcher and one resident physician (who had previously taken the MCAT®) first annotated nursing notes and then discharge summaries. Everyone was first trained on the high-context phenotypes to look for as well as their definitions by going through a number of notes in a group. A total of 13 phenotypes were considered for annotation, and the label “unsure” was used to indicate that an operator would like to seek assistance determining the presence of an phenotype from a more senior physician. Annotations for phenotypes required explicit text in the note indicating the phenotype, but as a result of the complexity of certain phenotypes there was no specific dictionary of terms, or order in which the terms appeared, required for a phenotype to be considered present."
],
[
"There exist a few limitations to this database. These data are unique to Beth Israel Deaconess Medical Center (BIDMC), and models resulting from these data may not generalize to notes generated at other hospitals. Admissions to hospitals not associated with BIDMC will not have been captured, and generalizability is limited due to the limited geographic distribution of patients which present to the hospital.",
"We welcome opportunities to continue to expand this dataset with additional phenotypes sought in the unstructured text, patient subsets, and text originating from different sources, with the goal of expanding the utility of NLP methods to further structure patient note text for retrospective analyses."
],
[
"All statistics and tabulations were generated and performed with R Statistical Software version 3.5.2. BIBREF14 Cohen's Kappa BIBREF15 was calculated for each phenotype and pair of annotators for which precisely two note annotations were recorded. Table summarizes the calculated Cohen's Kappa coefficients."
],
[
"As this corpus of annotated patient notes comprises original healthcare data which contains protected health information (PHI) per The Health Information Portability and Accountability Act of 1996 (HIPAA) BIBREF16 and can be joined to the MIMIC-III database, individuals who wish to access to the data must satisfy all requirements to access the data contained within MIMIC-III. To satisfy these conditions, an individual who wishes to access the database must take a “Data or Specimens Management” course, as well as sign a user agreement, as outlined on the MIMIC-III database webpage, where the latest version of this database will be hosted as “Annotated Clinical Texts from MIMIC” BIBREF17. This corpus can also be accessed on GitHub after completing all of the above requirements."
],
[
"In the section, we present the performance of two well-established baseline models to automatically infer the phenotype based on the patient note, which we approach as a multi-label, multi-class text classification task BIBREF18. Each of the baseline model is a binary classifier indicating whether a given phenotype is present in the input patient note. As a result, we train a separate model for each phenotype."
],
[
"We convert each patient note into a bag of words, and give as input to a logistic regression."
],
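For illustration only, a minimal sketch of the bag-of-words + logistic regression baseline described above, assuming scikit-learn; the function name, train/test split and evaluation metric are placeholders rather than the paper's actual code.

```python
# Illustrative sketch (not the paper's code): bag-of-words features fed to a logistic
# regression, trained as one binary classifier per phenotype.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def train_bow_baseline(notes, labels):
    """notes: list of patient-note strings; labels: 0/1 flags for a single phenotype."""
    X_train, X_test, y_train, y_test = train_test_split(
        notes, labels, test_size=0.2, random_state=0)
    vectorizer = CountVectorizer()               # bag-of-words representation
    clf = LogisticRegression(max_iter=1000)      # binary classifier for this phenotype
    clf.fit(vectorizer.fit_transform(X_train), y_train)
    preds = clf.predict(vectorizer.transform(X_test))
    return f1_score(y_test, preds)
```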
[
"We follow the CNN architecture proposed by collobert2011natural and kim2014convolutional. We use the convolution widths from 1 to 4, and for each convolution width we set the number of filters to 100. We use dropout with a probability of $0.5$ to reduce overfitting BIBREF19. The trainable parameters were initialized using a uniform distribution from $-0.05$ to $0.05$. The model was optimized with adadelta BIBREF20. We use word2vec BIBREF21 as the word embeddings, which we pretrain on all the notes of MIMIC III v3.",
"Table presents the performance of the two baseline models (F1-score)."
],
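For illustration only, a minimal Keras sketch of the CNN baseline described above; the kernel widths 1-4 with 100 filters each, dropout of 0.5, uniform initialization in [-0.05, 0.05] and adadelta follow the paragraph, while the activation functions, layer layout and the pretrained word2vec embedding matrix are assumptions made for the sketch.

```python
# Illustrative sketch (not the paper's code): a Kim-style CNN with the hyperparameters
# stated above. embedding_matrix is assumed to hold word2vec vectors pretrained on the
# MIMIC-III notes; activation choices are assumptions, as the paragraph does not state them.
from tensorflow.keras import layers, models, initializers

def build_cnn(vocab_size, emb_dim, max_len, embedding_matrix):
    uniform = initializers.RandomUniform(minval=-0.05, maxval=0.05)
    inp = layers.Input(shape=(max_len,))
    emb = layers.Embedding(vocab_size, emb_dim,
                           embeddings_initializer=initializers.Constant(embedding_matrix))(inp)
    pooled = []
    for width in (1, 2, 3, 4):                                  # convolution widths 1 to 4
        conv = layers.Conv1D(100, width, activation="relu",
                             kernel_initializer=uniform)(emb)   # 100 filters per width
        pooled.append(layers.GlobalMaxPooling1D()(conv))
    x = layers.Dropout(0.5)(layers.Concatenate()(pooled))       # dropout probability 0.5
    out = layers.Dense(1, activation="sigmoid",
                       kernel_initializer=uniform)(x)           # one binary output per phenotype
    model = models.Model(inp, out)
    model.compile(optimizer="adadelta", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```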
[
"In this paper we have presented a new dataset containing discharge summaries and nursing progress notes, focusing on frequently readmitted patients and high-context social determinants of health, and originating from a large tertiary care hospital. Each patient note was annotated by at least one clinical researcher and one resident physician for 13 high-context patient phenotypes.",
"Phenotype definitions, dataset distribution, patient note statistics, inter-operator error, and the results of baseline models were reported to demonstrate that the dataset is well-suited for the development of both rule-based and statistical models for patient phenotyping. We hope that the release of this dataset will accelerate the development of algorithms for patient phenotyping, which in turn would significantly help medical research progress faster."
],
[
"The authors would like to acknowledge Kai-ou Tang and William Labadie-Moseley for assistance in the development of a graphical user interface for text annotation. We would also like to thank Philips Healthcare, The Laboratory of Computational Physiology at The Massachusetts Institute of Technology, and staff at the Beth Israel Deaconess Medical Center, Boston, for supporting the MIMIC-III database, from which these data were derived."
]
],
"section_name": [
"Introduction and Related Work",
"Data Characteristics",
"Methods",
"Limitations",
"Technical Validation",
"Usage Notes",
"Baselines",
"Baselines ::: Bag of Words + Logistic Regression",
"Baselines ::: Convolutional Neural Network (CNN)",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"5cd994741ce6c00d026f90fd0e52e1e0fd644030",
"6f9a96aae49194adc9077160495805465e320407"
],
"answer": [
{
"evidence": [
"As this corpus of annotated patient notes comprises original healthcare data which contains protected health information (PHI) per The Health Information Portability and Accountability Act of 1996 (HIPAA) BIBREF16 and can be joined to the MIMIC-III database, individuals who wish to access to the data must satisfy all requirements to access the data contained within MIMIC-III. To satisfy these conditions, an individual who wishes to access the database must take a “Data or Specimens Management” course, as well as sign a user agreement, as outlined on the MIMIC-III database webpage, where the latest version of this database will be hosted as “Annotated Clinical Texts from MIMIC” BIBREF17. This corpus can also be accessed on GitHub after completing all of the above requirements."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"As this corpus of annotated patient notes comprises original healthcare data which contains protected health information (PHI) per The Health Information Portability and Accountability Act of 1996 (HIPAA) BIBREF16 and can be joined to the MIMIC-III database, individuals who wish to access to the data must satisfy all requirements to access the data contained within MIMIC-III. To satisfy these conditions, an individual who wishes to access the database must take a “Data or Specimens Management” course, as well as sign a user agreement, as outlined on the MIMIC-III database webpage, where the latest version of this database will be hosted as “Annotated Clinical Texts from MIMIC” BIBREF17. This corpus can also be accessed on GitHub after completing all of the above requirements."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"Phenotype definitions, dataset distribution, patient note statistics, inter-operator error, and the results of baseline models were reported to demonstrate that the dataset is well-suited for the development of both rule-based and statistical models for patient phenotyping. We hope that the release of this dataset will accelerate the development of algorithms for patient phenotyping, which in turn would significantly help medical research progress faster."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We hope that the release of this dataset will accelerate the development of algorithms for patient phenotyping, which in turn would significantly help medical research progress faster."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"0a1a6431275825458b4ba12fc3d757835d7e68e3",
"3fdbb6e77e51e6e3cc6971b1a3dccc999fdd5369"
],
"answer": [
{
"evidence": [
"We have created a dataset of discharge summaries and nursing notes, all in the English language, with a focus on frequently readmitted patients, labeled with 15 clinical patient phenotypes believed to be associated with risk of recurrent Intensive Care Unit (ICU) readmission per our domain experts (co-authors LAC, PAT, DAG) as well as the literature. BIBREF10 BIBREF11 BIBREF12"
],
"extractive_spans": [
"15 clinical patient phenotypes"
],
"free_form_answer": "",
"highlighted_evidence": [
"We have created a dataset of discharge summaries and nursing notes, all in the English language, with a focus on frequently readmitted patients, labeled with 15 clinical patient phenotypes believed to be associated with risk of recurrent Intensive Care Unit (ICU) readmission per our domain experts (co-authors LAC, PAT, DAG) as well as the literature. BIBREF10 BIBREF11 BIBREF12"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype."
],
"extractive_spans": [],
"free_form_answer": "Thirteen different phenotypes are present in the dataset.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4c180a67d1d21ba4ff4d705c5fff25395fd40cb9"
],
"answer": [
{
"evidence": [
"Table defines each of the considered clinical patient phenotypes. Table counts the occurrences of these phenotypes across patient notes and Figure contains the corresponding correlation matrix. Lastly, Table presents an overview of some descriptive statistics on the patient notes' lengths.",
"FLOAT SELECTED: Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype."
],
"extractive_spans": [],
"free_form_answer": "Adv. Heart Disease, Adv. Lung Disease, Alcohol Abuse, Chronic Neurologic Dystrophies, Dementia, Depression, Developmental Delay, Obesity, Psychiatric disorders and Substance Abuse",
"highlighted_evidence": [
"Table defines each of the considered clinical patient phenotypes.",
"FLOAT SELECTED: Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Is this dataset publicly available for commercial use?",
"How many different phenotypes are present in the dataset?",
"What are 10 other phenotypes that are annotated?"
],
"question_id": [
"9658b5ffb5c56e5a48a3fea0342ad8fc99741908",
"46c9e5f335b2927db995a55a18b7c7621fd3d051",
"ce0e2a8675055a5468c4c54dbb099cfd743df8a7"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"computer vision",
"computer vision",
"computer vision"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype.",
"Table 2: Data set distribution. The second and fourth columns indicate how many discharge notes and nursing notes contain a given phenotype. The third and fifth columns indicate the corresponding percentages (i.e., the percentage of discharge notes and nursing notes that contain a given phenotype).",
"Table 3: Patient note statistics. IQR stands for interquartile range.",
"Figure 1: Correlation matrices of phenotypes for Nursing Notes and Discharge Summaries.",
"Table 4: Tabulated Cohen’s Kappa values for each on the two annotator pairs: the first pair is ETM & JTW, and the second pair is JF & JW.",
"Table 5: Results of the two baseline models (F1-score in percentage). BoW stands for Bag of Words."
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"4-Figure1-1.png",
"5-Table4-1.png",
"5-Table5-1.png"
]
} | [
"How many different phenotypes are present in the dataset?",
"What are 10 other phenotypes that are annotated?"
] | [
[
"2003.03044-3-Table1-1.png",
"2003.03044-Data Characteristics-0"
],
[
"2003.03044-3-Table1-1.png",
"2003.03044-Data Characteristics-3"
]
] | [
"Thirteen different phenotypes are present in the dataset.",
"Adv. Heart Disease, Adv. Lung Disease, Alcohol Abuse, Chronic Neurologic Dystrophies, Dementia, Depression, Developmental Delay, Obesity, Psychiatric disorders and Substance Abuse"
] | 177 |
1610.08815 | A Deeper Look into Sarcastic Tweets Using Deep Convolutional Neural Networks | Sarcasm detection is a key task for many natural language processing tasks. In sentiment analysis, for example, sarcasm can flip the polarity of an "apparently positive" sentence and, hence, negatively affect polarity detection performance. To date, most approaches to sarcasm detection have treated the task primarily as a text categorization problem. Sarcasm, however, can be expressed in very subtle ways and requires a deeper understanding of natural language that standard text categorization techniques cannot grasp. In this work, we develop models based on a pre-trained convolutional neural network for extracting sentiment, emotion and personality features for sarcasm detection. Such features, along with the network's baseline features, allow the proposed models to outperform the state of the art on benchmark datasets. We also address the often ignored generalizability issue of classifying data that have not been seen by the models at learning phase. | {
"paragraphs": [
[
"Sarcasm is defined as “a sharp, bitter, or cutting expression or remark; a bitter gibe or taunt”. As the fields of affective computing and sentiment analysis have gained increasing popularity BIBREF0 , it is a major concern to detect sarcastic, ironic, and metaphoric expressions. Sarcasm, especially, is key for sentiment analysis as it can completely flip the polarity of opinions. Understanding the ground truth, or the facts about a given event, allows for the detection of contradiction between the objective polarity of the event (usually negative) and its sarcastic characteristic by the author (usually positive), as in “I love the pain of breakup”. Obtaining such knowledge is, however, very difficult.",
"In our experiments, we exposed the classifier to such knowledge extracted indirectly from Twitter. Namely, we used Twitter data crawled in a time period, which likely contain both the sarcastic and non-sarcastic accounts of an event or similar events. We believe that unambiguous non-sarcastic sentences provided the classifier with the ground-truth polarity of those events, which the classifier could then contrast with the opposite estimations in sarcastic sentences. Twitter is a more suitable resource for this purpose than blog posts, because the polarity of short tweets is easier to detect (as all the information necessary to detect polarity is likely to be contained in the same sentence) and because the Twitter API makes it easy to collect a large corpus of tweets containing both sarcastic and non-sarcastic examples of the same event.",
"Sometimes, however, just knowing the ground truth or simple facts on the topic is not enough, as the text may refer to other events in order to express sarcasm. For example, the sentence “If Hillary wins, she will surely be pleased to recall Monica each time she enters the Oval Office :P :D”, which refers to the 2016 US presidential election campaign and to the events of early 1990's related to the US president Clinton, is sarcastic because Hillary, a candidate and Clinton's wife, would in fact not be pleased to recall her husband's alleged past affair with Monica Lewinsky. The system, however, would need a considerable amount of facts, commonsense knowledge, anaphora resolution, and logical reasoning to draw such a conclusion. In this paper, we will not deal with such complex cases.",
"Existing works on sarcasm detection have mainly focused on unigrams and the use of emoticons BIBREF1 , BIBREF2 , BIBREF3 , unsupervised pattern mining approach BIBREF4 , semi-supervised approach BIBREF5 and n-grams based approach BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 with sentiment features. Instead, we propose a framework that learns sarcasm features automatically from a sarcasm corpus using a convolutional neural network (CNN). We also investigate whether features extracted using the pre-trained sentiment, emotion and personality models can improve sarcasm detection performance. Our approach uses relatively lower dimensional feature vectors and outperforms the state of the art on different datasets. In summary, the main contributions of this paper are the following:",
"The rest of the paper is organized as follows: Section SECREF2 proposes a brief literature review on sarcasm detection; Section SECREF4 presents the proposed approach; experimental results and thorough discussion on the experiments are given in Section SECREF5 ; finally, Section SECREF6 concludes the paper."
],
[
"NLP research is gradually evolving from lexical to compositional semantics BIBREF10 through the adoption of novel meaning-preserving and context-aware paradigms such as convolutional networks BIBREF11 , recurrent belief networks BIBREF12 , statistical learning theory BIBREF13 , convolutional multiple kernel learning BIBREF14 , and commonsense reasoning BIBREF15 . But while other NLP tasks have been extensively investigated, sarcasm detection is a relatively new research topic which has gained increasing interest only recently, partly thanks to the rise of social media analytics and sentiment analysis. Sentiment analysis BIBREF16 and using multimodal information as a new trend BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF14 is a popular branch of NLP research that aims to understand sentiment of documents automatically using combination of various machine learning approaches BIBREF21 , BIBREF22 , BIBREF20 , BIBREF23 .",
"An early work in this field was done by BIBREF6 on a dataset of 6,600 manually annotated Amazon reviews using a kNN-classifier over punctuation-based and pattern-based features, i.e., ordered sequence of high frequency words. BIBREF1 used support vector machine (SVM) and logistic regression over a feature set of unigrams, dictionary-based lexical features and pragmatic features (e.g., emoticons) and compared the performance of the classifier with that of humans. BIBREF24 described a set of textual features for recognizing irony at a linguistic level, especially in short texts created via Twitter, and constructed a new model that was assessed along two dimensions: representativeness and relevance. BIBREF5 used the presence of a positive sentiment in close proximity of a negative situation phrase as a feature for sarcasm detection. BIBREF25 used the Balanced Window algorithm for classifying Dutch tweets as sarcastic vs. non-sarcastic; n-grams (uni, bi and tri) and intensifiers were used as features for classification.",
" BIBREF26 compared the performance of different classifiers on the Amazon review dataset using the imbalance between the sentiment expressed by the review and the user-given star rating. Features based on frequency (gap between rare and common words), written spoken gap (in terms of difference between usage), synonyms (based on the difference in frequency of synonyms) and ambiguity (number of words with many synonyms) were used by BIBREF3 for sarcasm detection in tweets. BIBREF9 proposed the use of implicit incongruity and explicit incongruity based features along with lexical and pragmatic features, such as emoticons and punctuation marks. Their method is very much similar to the method proposed by BIBREF5 except BIBREF9 used explicit incongruity features. Their method outperforms the approach by BIBREF5 on two datasets.",
" BIBREF8 compared the performance with different language-independent features and pre-processing techniques for classifying text as sarcastic and non-sarcastic. The comparison was done over three Twitter dataset in two different languages, two of these in English with a balanced and an imbalanced distribution and the third one in Czech. The feature set included n-grams, word-shape patterns, pointedness and punctuation-based features.",
"In this work, we use features extracted from a deep CNN for sarcasm detection. Some of the key differences between the proposed approach and existing methods include the use of a relatively smaller feature set, automatic feature extraction, the use of deep networks, and the adoption of pre-trained NLP models."
],
[
"Sarcasm detection is an important subtask of sentiment analysis BIBREF27 . Since sarcastic sentences are subjective, they carry sentiment and emotion-bearing information. Most of the studies in the literature BIBREF28 , BIBREF29 , BIBREF9 , BIBREF30 include sentiment features in sarcasm detection with the use of a state-of-the-art sentiment lexicon. Below, we explain how sentiment information is key to express sarcastic opinions and the approach we undertake to exploit such information for sarcasm detection.",
"In general, most sarcastic sentences contradict the fact. In the sentence “I love the pain present in the breakups\" (Figure FIGREF4 ), for example, the word “love\" contradicts “pain present in the breakups”, because in general no-one loves to be in pain. In this case, the fact (i.e., “pain in the breakups\") and the contradictory statement to that fact (i.e., “I love\") express sentiment explicitly. Sentiment shifts from positive to negative but, according to sentic patterns BIBREF31 , the literal sentiment remains positive. Sentic patterns, in fact, aim to detect the polarity expressed by the speaker; thus, whenever the construction “I love” is encountered, the sentence is positive no matter what comes after it (e.g., “I love the movie that you hate”). In this case, however, the sentence carries sarcasm and, hence, reflects the negative sentiment of the speaker.",
"In another example (Figure FIGREF4 ), the fact, i.e., “I left the theater during the interval\", has implicit negative sentiment. The statement “I love the movie\" contradicts the fact “I left the theater during the interval\"; thus, the sentence is sarcastic. Also in this case the sentiment shifts from positive to negative and hints at the sarcastic nature of the opinion.",
"The above discussion has made clear that sentiment (and, in particular, sentiment shifts) can largely help to detect sarcasm. In order to include sentiment shifting into the proposed framework, we train a sentiment model for sentiment-specific feature extraction. Training with a CNN helps to combine the local features in the lower layers into global features in the higher layers. We do not make use of sentic patterns BIBREF31 in this paper but we do plan to explore that research direction as a part of our future work. In the literature, it is found that sarcasm is user-specific too, i.e., some users have a particular tendency to post more sarcastic tweets than others. This acts as a primary intuition for us to extract personality-based features for sarcasm detection."
],
[
"As discussed in the literature BIBREF5 , sarcasm detection may depend on sentiment and other cognitive aspects. For this reason, we incorporate both sentiment and emotion clues in our framework. Along with these, we also argue that personality of the opinion holder is an important factor for sarcasm detection. In order to address all of these variables, we create different models for each of them, namely: sentiment, emotion and personality. The idea is to train each model on its corresponding benchmark dataset and, hence, use such pre-trained models together to extract sarcasm-related features from the sarcasm datasets.",
"Now, the viable research question here is - Do these models help to improve sarcasm detection performance?' Literature shows that they improve the performance but not significantly. Thus, do we need to consider those factors in spotting sarcastic sentences? Aren't n-grams enough for sarcasm detection? Throughout the rest of this paper, we address these questions in detail. The training of each model is done using a CNN. Below, we explain the framework in detail. Then, we discuss the pre-trained models. Figure FIGREF6 presents a visualization of the proposed framework."
],
[
"CNN can automatically extract key features from the training data. It grasps contextual local features from a sentence and, after several convolution operations, it forms a global feature vector out of those local features. CNN does not need the hand-crafted features used in traditional supervised classifiers. Such hand-crafted features are difficult to compute and a good guess for encoding the features is always necessary in order to get satisfactory results. CNN, instead, uses a hierarchy of local features which are important to learn context. The hand-crafted features often ignore such a hierarchy of local features.",
"Features extracted by CNN can therefore be used instead of hand-crafted features, as they carry more useful information. The idea behind convolution is to take the dot product of a vector of INLINEFORM0 weights INLINEFORM1 also known as kernel vector with each INLINEFORM2 -gram in the sentence INLINEFORM3 to obtain another sequence of features INLINEFORM4 . DISPLAYFORM0 ",
"Thus, a max pooling operation is applied over the feature map and the maximum value INLINEFORM0 is taken as the feature corresponding to this particular kernel vector. Similarly, varying kernel vectors and window sizes are used to obtain multiple features BIBREF32 . For each word INLINEFORM1 in the vocabulary, a INLINEFORM2 -dimensional vector representation is given in a look up table that is learned from the data BIBREF33 . The vector representation of a sentence, hence, is a concatenation of vectors for individual words. Similarly, we can have look up tables for other features. One might want to provide features other than words if these features are suspected to be helpful. The convolution kernels are then applied to word vectors instead of individual words.",
"We use these features to train higher layers of the CNN, in order to represent bigger groups of words in sentences. We denote the feature learned at hidden neuron INLINEFORM0 in layer INLINEFORM1 as INLINEFORM2 . Multiple features may be learned in parallel in the same CNN layer. The features learned in each layer are used to train the next layer: DISPLAYFORM0 ",
"where * indicates convolution and INLINEFORM0 is a weight kernel for hidden neuron INLINEFORM1 and INLINEFORM2 is the total number of hidden neurons. The CNN sentence model preserves the order of words by adopting convolution kernels of gradually increasing sizes that span an increasing number of words and ultimately the entire sentence. As mentioned above, each word in a sentence is represented using word embeddings.",
"We employ the publicly available word2vec vectors, which were trained on 100 billion words from Google News. The vectors are of dimensionality 300, trained using the continuous bag-of-words architecture BIBREF33 . Words not present in the set of pre-trained words are initialized randomly. However, while training the neural network, we use non-static representations. These include the word vectors, taken as input, into the list of parameters to be learned during training.",
"Two primary reasons motivated us to use non-static channels as opposed to static ones. Firstly, the common presence of informal language and words in tweets resulted in a relatively high random initialization of word vectors due to the unavailability of these words in the word2vec dictionary. Secondly, sarcastic sentences are known to include polarity shifts in sentimental and emotional degrees. For example, “I love the pain present in breakups\" is a sarcastic sentence with a significant change in sentimental polarity. As word2vec was not trained to incorporate these nuances, we allow our models to update the embeddings during training in order to include them. Each sentence is wrapped to a window of INLINEFORM0 , where INLINEFORM1 is the maximum number of words amongst all sentences in the dataset. We use the output of the fully-connected layer of the network as our feature vector.",
"We have done two kinds of experiments: firstly, we used CNN for the classification; secondly, we extracted features from the fully-connected layer of the CNN and fed them to an SVM for the final classification. The latter CNN-SVM scheme is quite useful for text classification as shown by Poria et al. BIBREF18 . We carry out n-fold cross-validation on the dataset using CNN. In every fold iteration, in order to obtain the training and test features, the output of the fully-connected layer is treated as features to be used for the final classification using SVM. Table TABREF12 shows the training settings for each CNN model developed in this work. ReLU is used as the non-linear activation function of the network. The network configurations of all models developed in this work are given in Table TABREF12 ."
],
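For illustration only, a minimal sketch of the CNN-SVM scheme described above, assuming Keras and scikit-learn; the layer name "fc", the linear kernel and the use of an already-trained CNN are assumptions made for the sketch (the paper obtains the training and test features within each fold iteration).

```python
# Illustrative sketch (not the paper's code): truncate a trained sentence CNN at its
# fully-connected layer, treat the activations as features, and classify them with an SVM.
import numpy as np
from tensorflow.keras.models import Model
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score

def cnn_svm_cross_val(cnn_model, X, y, fc_layer_name="fc", n_folds=5):
    """X: padded word-index matrix; y: binary labels (both numpy arrays)."""
    extractor = Model(inputs=cnn_model.input,
                      outputs=cnn_model.get_layer(fc_layer_name).output)
    feats = extractor.predict(X)                    # fully-connected-layer features
    scores = []
    for train_idx, test_idx in StratifiedKFold(n_splits=n_folds).split(feats, y):
        svm = SVC(kernel="linear")                  # kernel choice is an assumption
        svm.fit(feats[train_idx], y[train_idx])
        preds = svm.predict(feats[test_idx])
        scores.append(f1_score(y[test_idx], preds, average="macro"))  # macro-F1, as reported
    return float(np.mean(scores))
```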
[
"As discussed above, sentiment clues play an important role for sarcastic sentence detection. In our work, we train a CNN (see Section SECREF5 for details) on a sentiment benchmark dataset. This pre-trained model is then used to extract features from the sarcastic datasets. In particular, we use Semeval 2014 BIBREF34 Twitter Sentiment Analysis Dataset for the training. This dataset contains 9,497 tweets out of which 5,895 are positive, 3,131 are negative and 471 are neutral. The fully-connected layer of the CNN used for sentiment feature extraction has 100 neurons, so 100 features are extracted from this pre-trained model. The final softmax determines whether a sentence is positive, negative or neutral. Thus, we have three neurons in the softmax layer."
],
[
"We use the CNN structure as described in Section SECREF5 for emotional feature extraction. As a dataset for extracting emotion-related features, we use the corpus developed by BIBREF35 . This dataset consists of blog posts labeled by their corresponding emotion categories. As emotion taxonomy, the authors used six basic emotions, i.e., Anger, Disgust, Surprise, Sadness, Joy and Fear. In particular, the blog posts were split into sentences and each sentence was labeled. The dataset contains 5,205 sentences labeled by one of the emotion labels. After employing this model on the sarcasm dataset, we obtained a 150-dimensional feature vector from the fully-connected layer. As the aim of training is to classify each sentence into one of the six emotion classes, we used six neurons in the softmax layer."
],
[
"Detecting personality from text is a well-known challenging problem. In our work, we use five personality traits described by BIBREF36 , i.e., Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism, sometimes abbreviated as OCEAN (by their first letters). As a training dataset, we use the corpus developed by BIBREF36 , which contains 2,400 essays labeled by one of the five personality traits each.",
"The fully-connected layer has 150 neurons, which are treated as the features. We concatenate the feature vector of each personality dimension in order to create the final feature vector. Thus, the personality model ultimately extracts a 750-dimensional feature vector (150-dimensional feature vector for each of the five personality traits). This network is replicated five times, one for each personality trait. In particular, we create a CNN for each personality trait and the aim of each CNN is to classify a sentence into binary classes, i.e., whether it expresses a personality trait or not."
],
[
"CNN can also be employed on the sarcasm datasets in order to identify sarcastic and non-sarcastic tweets. We term the features extracted from this network baseline features, the method as baseline method and the CNN architecture used in this baseline method as baseline CNN. Since the fully-connected layer has 100 neurons, we have 100 baseline features in our experiment. This method is termed baseline method as it directly aims to classify a sentence as sarcastic vs non-sarcastic. The baseline CNN extracts the inherent semantics from the sarcastic corpus by employing deep domain understanding. The process of using baseline features with other features extracted from the pre-trained model is described in Section SECREF24 ."
],
[
"In this section, we present the experimental results using different feature combinations and compare them with the state of the art. For each feature we show the results using only CNN and using CNN-SVM (i.e., when the features extracted by CNN are fed to the SVM). Macro-F1 measure is used as an evaluation scheme in the experiments."
],
[
"This dataset was created by BIBREF8 . The tweets were downloaded from Twitter using #sarcasm as a marker for sarcastic tweets. It is a monolingual English dataset which consists of a balanced distribution of 50,000 sarcastic tweets and 50,000 non-sarcastic tweets.",
"Since sarcastic tweets are less frequently used BIBREF8 , we also need to investigate the robustness of the selected features and the model trained on these features on an imbalanced dataset. To this end, we used another English dataset from BIBREF8 . It consists of 25,000 sarcastic tweets and 75,000 non-sarcastic tweets.",
"We have obtained this dataset from The Sarcasm Detector. It contains 120,000 tweets, out of which 20,000 are sarcastic and 100,000 are non-sarcastic. We randomly sampled 10,000 sarcastic and 20,000 non-sarcastic tweets from the dataset. Visualization of both the original and subset data show similar characteristics.",
"A two-step methodology has been employed in filtering the datasets used in our experiments. Firstly, we identified and removed all the “user\", “URL\" and “hashtag\" references present in the tweets using efficient regular expressions. Special emphasis was given to this step to avoid traces of hashtags, which might trigger the models to provide biased results. Secondly, we used NLTK Twitter Tokenizer to ensure proper tokenization of words along with special symbols and emoticons. Since our deep CNNs extract contextual information present in tweets, we include emoticons as part of the vocabulary. This enables the emoticons to hold a place in the word embedding space and aid in providing information about the emotions present in the sentence."
],
[
"Throughout this research, we have carried out several experiments with various feature combinations. For the sake of clarity, we explain below how the features extracted using difference models are merged.",
"In the standard feature merging process, we first extract the features from all deep CNN based feature extraction models and then we concatenate them. Afterwards, SVM is employed on the resulted feature vector.",
"In another setting, we use the features extracted from the pre-trained models as the static channels of features in the CNN of the baseline method. These features are appended to the hidden layer of the baseline CNN, preceding the final output softmax layer.",
"For comparison, we have re-implemented the state-of-the-art methods. Since BIBREF9 did not mention about the sentiment lexicon they use in the experiment, we used SenticNet BIBREF37 in the re-implementation of their method."
],
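For illustration only, a minimal sketch of the standard feature-merging strategy described above (concatenating the feature vectors from the individual extractors and classifying with an SVM); the stated dimensionalities (100 baseline + 100 sentiment + 150 emotion + 750 personality = 1,100) come from the earlier sections, while the extractor outputs themselves are placeholders.

```python
# Illustrative sketch (not the paper's code): concatenate the baseline (100-d),
# sentiment (100-d), emotion (150-d) and personality (750-d) feature vectors into a
# single 1,100-d representation and classify it with an SVM.
import numpy as np
from sklearn.svm import SVC

def merge_and_train(feature_blocks, labels):
    """feature_blocks: list of (n_samples, d_i) arrays from the individual extractors."""
    merged = np.concatenate(feature_blocks, axis=1)   # standard feature merging
    svm = SVC(kernel="linear")                        # kernel choice is an assumption
    svm.fit(merged, labels)
    return svm
```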
[
"As shown in Table TABREF29 , for every feature CNN-SVM outperforms the performance of the CNN. Following BIBREF6 , we have carried out a 5-fold cross-validation on this dataset. The baseline features ( SECREF16 ) perform best among other features. Among all the pre-trained models, the sentiment model (F1-score: 87.00%) achieves better performance in comparison with the other two pre-trained models. Interestingly, when we merge the baseline features with the features extracted by the pre-trained deep NLP models, we only get 0.11% improvement over the F-score. It means that the baseline features alone are quite capable to detect sarcasm. On the other hand, when we combine sentiment, emotion and personality features, we obtain 90.70% F1-score. This indicates that the pre-trained features are indeed useful for sarcasm detection. We also compare our approach with the best research study conducted on this dataset (Table TABREF30 ). Both the proposed baseline model and the baseline + sentiment + emotion + personality model outperform the state of the art BIBREF9 , BIBREF8 . One important difference with the state of the art is that BIBREF8 used relatively larger feature vector size ( INLINEFORM0 500,000) than we used in our experiment (1,100). This not only prevents our model to overfit the data but also speeds up the computation. Thus, we obtain an improvement in the overall performance with automatic feature extraction using a relatively lower dimensional feature space.",
"In the literature, word n-grams, skipgrams and character n-grams are used as baseline features. According to Ptacek et al. BIBREF8 , these baseline features along with the other features (sentiment features and part-of-speech based features) produced the best performance. However, Ptacek et al. did not analyze the performance of these features when they were not used with the baseline features. Pre-trained word embeddings play an important role in the performance of the classifier because, when we use randomly generated embeddings, performance falls down to 86.23% using all features."
],
[
"5-fold cross-validation has been carried out on Dataset 2. Also for this dataset, we get the best accuracy when we use all features. Baseline features have performed significantly better (F1-score: 92.32%) than all other features. Supporting the observations we have made from the experiments on Dataset 1, we see CNN-SVM outperforming CNN on Dataset 2. However, when we use all the features, CNN alone (F1-score: 89.73%) does not outperform the state of the art BIBREF8 (F1-score: 92.37%). As shown in Table TABREF30 , CNN-SVM on the baseline + sentiment + emotion + personality feature set outperforms the state of the art (F1-score: 94.80%). Among the pre-trained models, the sentiment model performs best (F1-score: 87.00%).",
"Table TABREF29 shows the performance of different feature combinations. The gap between the F1-scores of only baseline features and all features is larger on the imbalanced dataset than the balanced dataset. This supports our claim that sentiment, emotion and personality features are very useful for sarcasm detection, thanks to the pre-trained models. The F1-score using sentiment features when combined with baseline features is 94.60%. On both of the datasets, emotion and sentiment features perform better than the personality features. Interestingly, using only sentiment, emotion and personality features, we achieve 90.90% F1-score."
],
[
"Experimental results on Dataset 3 show the similar trends (Table TABREF30 ) as compared to Dataset 1 and Dataset 2. The highest performance (F1-score 93.30%) is obtained when we combine baseline features with sentiment, emotion and personality features. In this case, also CNN-SVM consistently performs better than CNN for every feature combination. The sentiment model is found to be the best pre-trained model. F1-score of 84.43% is obtained when we merge sentiment, emotion and personality features.",
"Dataset 3 is more complex and non-linear in nature compared to the other two datasets. As shown in Table TABREF30 , the methods by BIBREF9 and BIBREF8 perform poorly on this dataset. The TP rate achieved by BIBREF9 is only 10.07% and that means their method suffers badly on complex data. The approach of BIBREF8 has also failed to perform well on Dataset 3, achieving 62.37% with a better TP rate of 22.15% than BIBREF9 . On the other hand, our proposed model performs consistently well on this dataset achieving 93.30%."
],
[
"To test the generalization capability of the proposed approach, we perform training on Dataset 1 and test on Dataset 3. The F1-score drops down dramatically to 33.05%. In order to understand this finding, we visualize each dataset using PCA (Figure FIGREF17 ). It depicts that, although Dataset 1 is mostly linearly separable, Dataset 3 is not. A linear kernel that performs well on Dataset 1 fails to provide good performance on Dataset 3. If we use RBF kernel, it overfits the data and produces worse results than what we get using linear kernel. Similar trends are seen in the performance of other two state-of-the-art approaches BIBREF9 , BIBREF8 . Thus, we decide to perform training on Dataset 3 and test on the Dataset 1. As expected better performance is obtained with F1-score 76.78%. However, the other two state-of-the-art approaches fail to perform well in this setting. While the method by BIBREF9 obtains F1-score of 47.32%, the approach by BIBREF8 achieves 53.02% F1-score when trained on Dataset 3 and tested on Dataset 1. Below, we discuss about this generalizability issue of the models developed or referred in this paper.",
"As discussed in the introduction, sarcasm is very much topic-dependent and highly contextual. For example, let us consider the tweet “I am so glad to see Tanzania played very well, I can now sleep well :P\". Unless one knows that Tanzania actually did not play well in that game, it is not possible to spot the sarcastic nature of this sentence. Thus, an n-gram based sarcasm detector trained at time INLINEFORM0 may perform poorly to detect sarcasm in the tweets crawled at time INLINEFORM1 (given that there is a considerable gap between these time stamps) because of the diversity of the topics (new events occur, new topics are discussed) of the tweets. Sentiment and other contextual clues can help to spot the sarcastic nature in this kind of tweets. A highly positive statement which ends with a emoticon expressing joke can be sarcastic.",
"State-of-the-art methods lack these contextual information which, in our case, we extract using pre-trained sentiment, emotion and personality models. Not only these pre-trained models, the baseline method (baseline CNN architecture) performs better than the state-of-the-art models in this generalizability test setting. In our generalizability test, when the pre-trained features are used with baseline features, we get 4.19% F1-score improvement over the baseline features. On the other hand, when they are not used with the baseline features, together they produce 64.25% F1-score.",
"Another important fact is that an n-grams model cannot perform well on unseen data unless it is trained on a very large corpus. If most of the n-grams extracted from the unseen data are not in the vocabulary of the already trained n-grams model, in fact, the model will produce a very sparse feature vector representation of the dataset. Instead, we use the word2vec embeddings as the source of the features, as word2vec allows for the computation of similarities between unseen data and training data."
],
[
"Our experimental results show that the baseline features outperform the pre-trained features for sarcasm detection. However, the combination of pre-trained features and baseline features beats both of themselves alone. It is counterintuitive, since experimental results prove that both of those features learn almost the same global and contextual features. In particular, baseline network dominates over pre-trained network as the former learns most of the features learned by the latter. Nonetheless, the combination of baseline and pre-trained classifiers improves the overall performance and generalizability, hence proving their effectiveness in sarcasm detection. Experimental results show that sentiment and emotion features are the most useful features, besides baseline features (Figure FIGREF36 ). Therefore, in order to reach a better understanding of the relation between personality features among themselves and with other pre-trained features, we carried out Spearman correlation testing. Results, displayed in Table TABREF39 , show that those features are highly correlated with each other."
],
[
"In this work, we developed pre-trained sentiment, emotion and personality models for identifying sarcastic text using CNN, which are found to be very effective for sarcasm detection. In the future, we plan to evaluate the performance of the proposed method on a large corpus and other domain-dependent corpora. Future work will also focus on analyzing past tweets and activities of users in order to better understand their personality and profile and, hence, further improve the disambiguation between sarcastic and non-sarcastic text. "
]
],
"section_name": [
"Introduction",
"Related Works",
"Sentiment Analysis and Sarcasm Detection",
"The Proposed Framework",
"General CNN Framework",
"Sentiment Feature Extraction Model",
"Emotion Feature Extraction Model",
"Personality Feature Extraction Model",
"Baseline Method and Features",
"Experimental Results and Discussion",
"Sarcasm Datasets Used in the Experiment",
"Merging the Features",
"Results on Dataset 1",
"Results on Dataset 2",
"Results on Dataset 3",
"Testing Generalizability of the Models and Discussions",
"Baseline Features vs Pre-trained Features",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"17369059985532aaf5acfc43ed04210856589747",
"41998d3e4e187d09043f5064787447653c7c20fb"
],
"answer": [
{
"evidence": [
"To test the generalization capability of the proposed approach, we perform training on Dataset 1 and test on Dataset 3. The F1-score drops down dramatically to 33.05%. In order to understand this finding, we visualize each dataset using PCA (Figure FIGREF17 ). It depicts that, although Dataset 1 is mostly linearly separable, Dataset 3 is not. A linear kernel that performs well on Dataset 1 fails to provide good performance on Dataset 3. If we use RBF kernel, it overfits the data and produces worse results than what we get using linear kernel. Similar trends are seen in the performance of other two state-of-the-art approaches BIBREF9 , BIBREF8 . Thus, we decide to perform training on Dataset 3 and test on the Dataset 1. As expected better performance is obtained with F1-score 76.78%. However, the other two state-of-the-art approaches fail to perform well in this setting. While the method by BIBREF9 obtains F1-score of 47.32%, the approach by BIBREF8 achieves 53.02% F1-score when trained on Dataset 3 and tested on Dataset 1. Below, we discuss about this generalizability issue of the models developed or referred in this paper."
],
"extractive_spans": [
"BIBREF9 ",
"BIBREF8 "
],
"free_form_answer": "",
"highlighted_evidence": [
"Similar trends are seen in the performance of other two state-of-the-art approaches BIBREF9 , BIBREF8 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As shown in Table TABREF29 , for every feature CNN-SVM outperforms the performance of the CNN. Following BIBREF6 , we have carried out a 5-fold cross-validation on this dataset. The baseline features ( SECREF16 ) perform best among other features. Among all the pre-trained models, the sentiment model (F1-score: 87.00%) achieves better performance in comparison with the other two pre-trained models. Interestingly, when we merge the baseline features with the features extracted by the pre-trained deep NLP models, we only get 0.11% improvement over the F-score. It means that the baseline features alone are quite capable to detect sarcasm. On the other hand, when we combine sentiment, emotion and personality features, we obtain 90.70% F1-score. This indicates that the pre-trained features are indeed useful for sarcasm detection. We also compare our approach with the best research study conducted on this dataset (Table TABREF30 ). Both the proposed baseline model and the baseline + sentiment + emotion + personality model outperform the state of the art BIBREF9 , BIBREF8 . One important difference with the state of the art is that BIBREF8 used relatively larger feature vector size ( INLINEFORM0 500,000) than we used in our experiment (1,100). This not only prevents our model to overfit the data but also speeds up the computation. Thus, we obtain an improvement in the overall performance with automatic feature extraction using a relatively lower dimensional feature space."
],
"extractive_spans": [
"BIBREF9 , BIBREF8"
],
"free_form_answer": "",
"highlighted_evidence": [
"Both the proposed baseline model and the baseline + sentiment + emotion + personality model outperform the state of the art BIBREF9 , BIBREF8 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"51fa5632abfcb3a9e5502d4d786a7fdc0091314b",
"de794045b3cd1b83fa48188b911aacf79d5fd6bc"
],
"answer": [
{
"evidence": [
"As discussed above, sentiment clues play an important role for sarcastic sentence detection. In our work, we train a CNN (see Section SECREF5 for details) on a sentiment benchmark dataset. This pre-trained model is then used to extract features from the sarcastic datasets. In particular, we use Semeval 2014 BIBREF34 Twitter Sentiment Analysis Dataset for the training. This dataset contains 9,497 tweets out of which 5,895 are positive, 3,131 are negative and 471 are neutral. The fully-connected layer of the CNN used for sentiment feature extraction has 100 neurons, so 100 features are extracted from this pre-trained model. The final softmax determines whether a sentence is positive, negative or neutral. Thus, we have three neurons in the softmax layer.",
"Sarcasm Datasets Used in the Experiment",
"This dataset was created by BIBREF8 . The tweets were downloaded from Twitter using #sarcasm as a marker for sarcastic tweets. It is a monolingual English dataset which consists of a balanced distribution of 50,000 sarcastic tweets and 50,000 non-sarcastic tweets.",
"Since sarcastic tweets are less frequently used BIBREF8 , we also need to investigate the robustness of the selected features and the model trained on these features on an imbalanced dataset. To this end, we used another English dataset from BIBREF8 . It consists of 25,000 sarcastic tweets and 75,000 non-sarcastic tweets.",
"We have obtained this dataset from The Sarcasm Detector. It contains 120,000 tweets, out of which 20,000 are sarcastic and 100,000 are non-sarcastic. We randomly sampled 10,000 sarcastic and 20,000 non-sarcastic tweets from the dataset. Visualization of both the original and subset data show similar characteristics."
],
"extractive_spans": [
"Semeval 2014 BIBREF34 Twitter Sentiment Analysis Dataset ",
" dataset was created by BIBREF8",
" English dataset from BIBREF8",
" dataset from The Sarcasm Detector"
],
"free_form_answer": "",
"highlighted_evidence": [
" In particular, we use Semeval 2014 BIBREF34 Twitter Sentiment Analysis Dataset for the training",
"Sarcasm Datasets Used in the Experiment\nThis dataset was created by BIBREF8 . The tweets were downloaded from Twitter using #sarcasm as a marker for sarcastic tweets. It is a monolingual English dataset which consists of a balanced distribution of 50,000 sarcastic tweets and 50,000 non-sarcastic tweets.\n\nSince sarcastic tweets are less frequently used BIBREF8 , we also need to investigate the robustness of the selected features and the model trained on these features on an imbalanced dataset. To this end, we used another English dataset from BIBREF8 . It consists of 25,000 sarcastic tweets and 75,000 non-sarcastic tweets.\n\nWe have obtained this dataset from The Sarcasm Detector. It contains 120,000 tweets, out of which 20,000 are sarcastic and 100,000 are non-sarcastic. We randomly sampled 10,000 sarcastic and 20,000 non-sarcastic tweets from the dataset. Visualization of both the original and subset data show similar characteristics."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"This dataset was created by BIBREF8 . The tweets were downloaded from Twitter using #sarcasm as a marker for sarcastic tweets. It is a monolingual English dataset which consists of a balanced distribution of 50,000 sarcastic tweets and 50,000 non-sarcastic tweets.",
"Since sarcastic tweets are less frequently used BIBREF8 , we also need to investigate the robustness of the selected features and the model trained on these features on an imbalanced dataset. To this end, we used another English dataset from BIBREF8 . It consists of 25,000 sarcastic tweets and 75,000 non-sarcastic tweets.",
"We have obtained this dataset from The Sarcasm Detector. It contains 120,000 tweets, out of which 20,000 are sarcastic and 100,000 are non-sarcastic. We randomly sampled 10,000 sarcastic and 20,000 non-sarcastic tweets from the dataset. Visualization of both the original and subset data show similar characteristics."
],
"extractive_spans": [
"This dataset was created by BIBREF8",
"another English dataset from BIBREF8 ",
" dataset from The Sarcasm Detector"
],
"free_form_answer": "",
"highlighted_evidence": [
"This dataset was created by BIBREF8 . The tweets were downloaded from Twitter using #sarcasm as a marker for sarcastic tweets. It is a monolingual English dataset which consists of a balanced distribution of 50,000 sarcastic tweets and 50,000 non-sarcastic tweets.\n\nSince sarcastic tweets are less frequently used BIBREF8 , we also need to investigate the robustness of the selected features and the model trained on these features on an imbalanced dataset. To this end, we used another English dataset from BIBREF8 . It consists of 25,000 sarcastic tweets and 75,000 non-sarcastic tweets.\n\nWe have obtained this dataset from The Sarcasm Detector. It contains 120,000 tweets, out of which 20,000 are sarcastic and 100,000 are non-sarcastic. We randomly sampled 10,000 sarcastic and 20,000 non-sarcastic tweets from the dataset. Visualization of both the original and subset data show similar characteristics."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"0a241ac808bb91d69351219c3f12d7ac04d27f21"
],
"answer": [
{
"evidence": [
"CNN can also be employed on the sarcasm datasets in order to identify sarcastic and non-sarcastic tweets. We term the features extracted from this network baseline features, the method as baseline method and the CNN architecture used in this baseline method as baseline CNN. Since the fully-connected layer has 100 neurons, we have 100 baseline features in our experiment. This method is termed baseline method as it directly aims to classify a sentence as sarcastic vs non-sarcastic. The baseline CNN extracts the inherent semantics from the sarcastic corpus by employing deep domain understanding. The process of using baseline features with other features extracted from the pre-trained model is described in Section SECREF24 ."
],
"extractive_spans": [],
"free_form_answer": " The features extracted from CNN.",
"highlighted_evidence": [
"CNN can also be employed on the sarcasm datasets in order to identify sarcastic and non-sarcastic tweets. We term the features extracted from this network baseline features, the method as baseline method and the CNN architecture used in this baseline method as baseline CNN. Since the fully-connected layer has 100 neurons, we have 100 baseline features in our experiment. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"What are the state of the art models?",
"Which benchmark datasets are used?",
"What are the network's baseline features?"
],
"question_id": [
"3a6e843c6c81244c14730295cfb8b865cd7ede46",
"fabf6fdcfb4c4c7affaa1e4336658c1e6635b1bf",
"1beb4a590fa6127a138f4ed1dd13d5d51cc96809"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Figure 1: Sentiment shifting can be an indicator of sarcasm.",
"Figure 2: The proposed framework: deep CNNs are combined together to detect sarcastic tweets.",
"Table 1: Training settings for each deep model. Legenda: FC = Fully-Connected, S = Sentiment model, E = Emotion model, P = Personality model, B = Baseline model.",
"Figure 3: Visualization of the data.",
"Table 2: Experimental Results. Legenda: B = Baseline, S = Sentiment, E = Emotion, P = Personality, 5-fold cross-validation is carried out for all the experiments.",
"Table 3: Performance comparison of the proposed method and the state-of-the-art approaches. Legenda: D3 => D1 is the model trained on Dataset 3 and tested on Dataset 1.",
"Figure 4: Plot of the performance of different feature combinations and methods.",
"Table 4: Spearman’s correlations between different features. * Correlation is significant at the 0.05 level."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"6-Table1-1.png",
"7-Figure3-1.png",
"8-Table2-1.png",
"9-Table3-1.png",
"10-Figure4-1.png",
"10-Table4-1.png"
]
} | [
"What are the network's baseline features?"
] | [
[
"1610.08815-Baseline Method and Features-0"
]
] | [
" The features extracted from CNN."
] | 178 |
1909.00015 | Adaptively Sparse Transformers | Attention mechanisms have become ubiquitous in NLP. Recent architectures, notably the Transformer, learn powerful context-aware word representations through layered, multi-headed attention. The multiple heads learn diverse types of word relationships. However, with standard softmax attention, all attention heads are dense, assigning a non-zero weight to all context words. In this work, we introduce the adaptively sparse Transformer, wherein attention heads have flexible, context-dependent sparsity patterns. This sparsity is accomplished by replacing softmax with $\alpha$-entmax: a differentiable generalization of softmax that allows low-scoring words to receive precisely zero weight. Moreover, we derive a method to automatically learn the $\alpha$ parameter -- which controls the shape and sparsity of $\alpha$-entmax -- allowing attention heads to choose between focused or spread-out behavior. Our adaptively sparse Transformer improves interpretability and head diversity when compared to softmax Transformers on machine translation datasets. Findings of the quantitative and qualitative analysis of our approach include that heads in different layers learn different sparsity preferences and tend to be more diverse in their attention distributions than softmax Transformers. Furthermore, at no cost in accuracy, sparsity in attention heads helps to uncover different head specializations. | {
"paragraphs": [
[
"The Transformer architecture BIBREF0 for deep neural networks has quickly risen to prominence in NLP through its efficiency and performance, leading to improvements in the state of the art of Neural Machine Translation BIBREF1, BIBREF2, as well as inspiring other powerful general-purpose models like BERT BIBREF3 and GPT-2 BIBREF4. At the heart of the Transformer lie multi-head attention mechanisms: each word is represented by multiple different weighted averages of its relevant context. As suggested by recent works on interpreting attention head roles, separate attention heads may learn to look for various relationships between tokens BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9.",
"The attention distribution of each head is predicted typically using the softmax normalizing transform. As a result, all context words have non-zero attention weight. Recent work on single attention architectures suggest that using sparse normalizing transforms in attention mechanisms such as sparsemax – which can yield exactly zero probabilities for irrelevant words – may improve performance and interpretability BIBREF12, BIBREF13, BIBREF14. Qualitative analysis of attention heads BIBREF0 suggests that, depending on what phenomena they capture, heads tend to favor flatter or more peaked distributions.",
"Recent works have proposed sparse Transformers BIBREF10 and adaptive span Transformers BIBREF11. However, the “sparsity\" of those models only limits the attention to a contiguous span of past tokens, while in this work we propose a highly adaptive Transformer model that is capable of attending to a sparse set of words that are not necessarily contiguous. Figure FIGREF1 shows the relationship of these methods with ours.",
"Our contributions are the following:",
"We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains.",
"We propose an adaptive version of sparse attention, where the shape of each attention head is learnable and can vary continuously and dynamically between the dense limit case of softmax and the sparse, piecewise-linear sparsemax case.",
"We make an extensive analysis of the added interpretability of these models, identifying both crisper examples of attention head behavior observed in previous work, as well as novel behaviors unraveled thanks to the sparsity and adaptivity of our proposed model."
],
[
"In NMT, the Transformer BIBREF0 is a sequence-to-sequence (seq2seq) model which maps an input sequence to an output sequence through hierarchical multi-head attention mechanisms, yielding a dynamic, context-dependent strategy for propagating information within and across sentences. It contrasts with previous seq2seq models, which usually rely either on costly gated recurrent operations BIBREF15, BIBREF16 or static convolutions BIBREF17.",
"Given $n$ query contexts and $m$ sequence items under consideration, attention mechanisms compute, for each query, a weighted representation of the items. The particular attention mechanism used in BIBREF0 is called scaled dot-product attention, and it is computed in the following way:",
"where $\\mathbf {Q} \\in \\mathbb {R}^{n \\times d}$ contains representations of the queries, $\\mathbf {K}, \\mathbf {V} \\in \\mathbb {R}^{m \\times d}$ are the keys and values of the items attended over, and $d$ is the dimensionality of these representations. The $\\mathbf {\\pi }$ mapping normalizes row-wise using softmax, $\\mathbf {\\pi }(\\mathbf {Z})_{ij} = \\operatornamewithlimits{\\mathsf {softmax}}(\\mathbf {z}_i)_j$, where",
"In words, the keys are used to compute a relevance score between each item and query. Then, normalized attention weights are computed using softmax, and these are used to weight the values of each item at each query context.",
"However, for complex tasks, different parts of a sequence may be relevant in different ways, motivating multi-head attention in Transformers. This is simply the application of Equation DISPLAY_FORM7 in parallel $H$ times, each with a different, learned linear transformation that allows specialization:",
"In the Transformer, there are three separate multi-head attention mechanisms for distinct purposes:",
"Encoder self-attention: builds rich, layered representations of each input word, by attending on the entire input sentence.",
"Context attention: selects a representative weighted average of the encodings of the input words, at each time step of the decoder.",
"Decoder self-attention: attends over the partial output sentence fragment produced so far.",
"Together, these mechanisms enable the contextualized flow of information between the input sentence and the sequential decoder."
],
[
"The softmax mapping (Equation DISPLAY_FORM8) is elementwise proportional to $\\exp $, therefore it can never assign a weight of exactly zero. Thus, unnecessary items are still taken into consideration to some extent. Since its output sums to one, this invariably means less weight is assigned to the relevant items, potentially harming performance and interpretability BIBREF18. This has motivated a line of research on learning networks with sparse mappings BIBREF19, BIBREF20, BIBREF21, BIBREF22. We focus on a recently-introduced flexible family of transformations, $\\alpha $-entmax BIBREF23, BIBREF14, defined as:",
"where $\\triangle ^d \\lbrace \\mathbf {p}\\in \\mathbb {R}^d:\\sum _{i} p_i = 1\\rbrace $ is the probability simplex, and, for $\\alpha \\ge 1$, $\\mathsf {H}^{\\textsc {T}}_\\alpha $ is the Tsallis continuous family of entropies BIBREF24:",
"This family contains the well-known Shannon and Gini entropies, corresponding to the cases $\\alpha =1$ and $\\alpha =2$, respectively.",
"Equation DISPLAY_FORM14 involves a convex optimization subproblem. Using the definition of $\\mathsf {H}^{\\textsc {T}}_\\alpha $, the optimality conditions may be used to derive the following form for the solution (Appendix SECREF83):",
"where $[\\cdot ]_+$ is the positive part (ReLU) function, $\\mathbf {1}$ denotes the vector of all ones, and $\\tau $ – which acts like a threshold – is the Lagrange multiplier corresponding to the $\\sum _i p_i=1$ constraint."
],
[
"The appeal of $\\alpha $-entmax for attention rests on the following properties. For $\\alpha =1$ (i.e., when $\\mathsf {H}^{\\textsc {T}}_\\alpha $ becomes the Shannon entropy), it exactly recovers the softmax mapping (We provide a short derivation in Appendix SECREF89.). For all $\\alpha >1$ it permits sparse solutions, in stark contrast to softmax. In particular, for $\\alpha =2$, it recovers the sparsemax mapping BIBREF19, which is piecewise linear. In-between, as $\\alpha $ increases, the mapping continuously gets sparser as its curvature changes.",
"To compute the value of $\\alpha $-entmax, one must find the threshold $\\tau $ such that the r.h.s. in Equation DISPLAY_FORM16 sums to one. BIBREF23 propose a general bisection algorithm. BIBREF14 introduce a faster, exact algorithm for $\\alpha =1.5$, and enable using $\\mathop {\\mathsf {\\alpha }\\textnormal {-}\\mathsf {entmax }}$ with fixed $\\alpha $ within a neural network by showing that the $\\alpha $-entmax Jacobian w.r.t. $\\mathbf {z}$ for $\\mathbf {p}^\\star = \\mathop {\\mathsf {\\alpha }\\textnormal {-}\\mathsf {entmax }}(\\mathbf {z})$ is",
"Our work furthers the study of $\\alpha $-entmax by providing a derivation of the Jacobian w.r.t. the hyper-parameter $\\alpha $ (Section SECREF3), thereby allowing the shape and sparsity of the mapping to be learned automatically. This is particularly appealing in the context of multi-head attention mechanisms, where we shall show in Section SECREF35 that different heads tend to learn different sparsity behaviors."
],
[
"We now propose a novel Transformer architecture wherein we simply replace softmax with $\\alpha $-entmax in the attention heads. Concretely, we replace the row normalization $\\mathbf {\\pi }$ in Equation DISPLAY_FORM7 by",
"This change leads to sparse attention weights, as long as $\\alpha >1$; in particular, $\\alpha =1.5$ is a sensible starting point BIBREF14."
],
[
"Unlike LSTM-based seq2seq models, where $\\alpha $ can be more easily tuned by grid search, in a Transformer, there are many attention heads in multiple layers. Crucial to the power of such models, the different heads capture different linguistic phenomena, some of them isolating important words, others spreading out attention across phrases BIBREF0. This motivates using different, adaptive $\\alpha $ values for each attention head, such that some heads may learn to be sparser, and others may become closer to softmax. We propose doing so by treating the $\\alpha $ values as neural network parameters, optimized via stochastic gradients along with the other weights."
],
[
"In order to optimize $\\alpha $ automatically via gradient methods, we must compute the Jacobian of the entmax output w.r.t. $\\alpha $. Since entmax is defined through an optimization problem, this is non-trivial and cannot be simply handled through automatic differentiation; it falls within the domain of argmin differentiation, an active research topic in optimization BIBREF25, BIBREF26.",
"One of our key contributions is the derivation of a closed-form expression for this Jacobian. The next proposition provides such an expression, enabling entmax layers with adaptive $\\alpha $. To the best of our knowledge, ours is the first neural network module that can automatically, continuously vary in shape away from softmax and toward sparse mappings like sparsemax.",
"Proposition 1 Let $\\mathbf {p}^\\star \\mathop {\\mathsf {\\alpha }\\textnormal {-}\\mathsf {entmax }}(\\mathbf {z})$ be the solution of Equation DISPLAY_FORM14. Denote the distribution $\\tilde{p}_i {(p_i^\\star )^{2 - \\alpha }}{ \\sum _j(p_j^\\star )^{2-\\alpha }}$ and let $h_i -p^\\star _i \\log p^\\star _i$. The $i$th component of the Jacobian $\\mathbf {g} \\frac{\\partial \\mathop {\\mathsf {\\alpha }\\textnormal {-}\\mathsf {entmax }}(\\mathbf {z})}{\\partial \\alpha }$ is",
"proof uses implicit function differentiation and is given in Appendix SECREF10.",
"Proposition UNKREF22 provides the remaining missing piece needed for training adaptively sparse Transformers. In the following section, we evaluate this strategy on neural machine translation, and analyze the behavior of the learned attention heads."
],
[
"We apply our adaptively sparse Transformers on four machine translation tasks. For comparison, a natural baseline is the standard Transformer architecture using the softmax transform in its multi-head attention mechanisms. We consider two other model variants in our experiments that make use of different normalizing transformations:",
"1.5-entmax: a Transformer with sparse entmax attention with fixed $\\alpha =1.5$ for all heads. This is a novel model, since 1.5-entmax had only been proposed for RNN-based NMT models BIBREF14, but never in Transformers, where attention modules are not just one single component of the seq2seq model but rather an integral part of all of the model components.",
"$\\alpha $-entmax: an adaptive Transformer with sparse entmax attention with a different, learned $\\alpha _{i,j}^t$ for each head.",
"The adaptive model has an additional scalar parameter per attention head per layer for each of the three attention mechanisms (encoder self-attention, context attention, and decoder self-attention), i.e.,",
"and we set $\\alpha _{i,j}^t = 1 + \\operatornamewithlimits{\\mathsf {sigmoid}}(a_{i,j}^t) \\in ]1, 2[$. All or some of the $\\alpha $ values can be tied if desired, but we keep them independent for analysis purposes."
],
[
"Our models were trained on 4 machine translation datasets of different training sizes:",
"[itemsep=.5ex,leftmargin=2ex]",
"IWSLT 2017 German $\\rightarrow $ English BIBREF27: 200K sentence pairs.",
"KFTT Japanese $\\rightarrow $ English BIBREF28: 300K sentence pairs.",
"WMT 2016 Romanian $\\rightarrow $ English BIBREF29: 600K sentence pairs.",
"WMT 2014 English $\\rightarrow $ German BIBREF30: 4.5M sentence pairs.",
"All of these datasets were preprocessed with byte-pair encoding BIBREF31, using joint segmentations of 32k merge operations."
],
[
"We follow the dimensions of the Transformer-Base model of BIBREF0: The number of layers is $L=6$ and number of heads is $H=8$ in the encoder self-attention, the context attention, and the decoder self-attention. We use a mini-batch size of 8192 tokens and warm up the learning rate linearly until 20k steps, after which it decays according to an inverse square root schedule. All models were trained until convergence of validation accuracy, and evaluation was done at each 10k steps for ro$\\rightarrow $en and en$\\rightarrow $de and at each 5k steps for de$\\rightarrow $en and ja$\\rightarrow $en. The end-to-end computational overhead of our methods, when compared to standard softmax, is relatively small; in training tokens per second, the models using $\\alpha $-entmax and $1.5$-entmax are, respectively, $75\\%$ and $90\\%$ the speed of the softmax model."
],
[
"We report test set tokenized BLEU BIBREF32 results in Table TABREF27. We can see that replacing softmax by entmax does not hurt performance in any of the datasets; indeed, sparse attention Transformers tend to have slightly higher BLEU, but their sparsity leads to a better potential for analysis. In the next section, we make use of this potential by exploring the learned internal mechanics of the self-attention heads."
],
[
"We conduct an analysis for the higher-resource dataset WMT 2014 English $\\rightarrow $ German of the attention in the sparse adaptive Transformer model ($\\alpha $-entmax) at multiple levels: we analyze high-level statistics as well as individual head behavior. Moreover, we make a qualitative analysis of the interpretability capabilities of our models."
],
[
"Figure FIGREF37 shows the learning trajectories of the $\\alpha $ parameters of a selected subset of heads. We generally observe a tendency for the randomly-initialized $\\alpha $ parameters to decrease initially, suggesting that softmax-like behavior may be preferable while the model is still very uncertain. After around one thousand steps, some heads change direction and become sparser, perhaps as they become more confident and specialized. This shows that the initialization of $\\alpha $ does not predetermine its sparsity level or the role the head will have throughout. In particular, head 8 in the encoder self-attention layer 2 first drops to around $\\alpha =1.3$ before becoming one of the sparsest heads, with $\\alpha \\approx 2$.",
"The overall distribution of $\\alpha $ values at convergence can be seen in Figure FIGREF38. We can observe that the encoder self-attention blocks learn to concentrate the $\\alpha $ values in two modes: a very sparse one around $\\alpha \\rightarrow 2$, and a dense one between softmax and 1.5-entmax . However, the decoder self and context attention only learn to distribute these parameters in a single mode. We show next that this is reflected in the average density of attention weight vectors as well."
],
[
"For any $\\alpha >1$, it would still be possible for the weight matrices in Equation DISPLAY_FORM9 to learn re-scalings so as to make attention sparser or denser. To visualize the impact of adaptive $\\alpha $ values, we compare the empirical attention weight density (the average number of tokens receiving non-zero attention) within each module, against sparse Transformers with fixed $\\alpha =1.5$.",
"Figure FIGREF40 shows that, with fixed $\\alpha =1.5$, heads tend to be sparse and similarly-distributed in all three attention modules. With learned $\\alpha $, there are two notable changes: (i) a prominent mode corresponding to fully dense probabilities, showing that our models learn to combine sparse and dense attention, and (ii) a distinction between the encoder self-attention – whose background distribution tends toward extreme sparsity – and the other two modules, who exhibit more uniform background distributions. This suggests that perhaps entirely sparse Transformers are suboptimal.",
"The fact that the decoder seems to prefer denser attention distributions might be attributed to it being auto-regressive, only having access to past tokens and not the full sentence. We speculate that it might lose too much information if it assigned weights of zero to too many tokens in the self-attention, since there are fewer tokens to attend to in the first place.",
"Teasing this down into separate layers, Figure FIGREF41 shows the average (sorted) density of each head for each layer. We observe that $\\alpha $-entmax is able to learn different sparsity patterns at each layer, leading to more variance in individual head behavior, to clearly-identified dense and sparse heads, and overall to different tendencies compared to the fixed case of $\\alpha =1.5$."
],
[
"To measure the overall disagreement between attention heads, as a measure of head diversity, we use the following generalization of the Jensen-Shannon divergence:",
"where $\\mathbf {p}_j$ is the vector of attention weights assigned by head $j$ to each word in the sequence, and $\\mathsf {H}^\\textsc {S}$ is the Shannon entropy, base-adjusted based on the dimension of $\\mathbf {p}$ such that $JS \\le 1$. We average this measure over the entire validation set. The higher this metric is, the more the heads are taking different roles in the model.",
"Figure FIGREF44 shows that both sparse Transformer variants show more diversity than the traditional softmax one. Interestingly, diversity seems to peak in the middle layers of the encoder self-attention and context attention, while this is not the case for the decoder self-attention.",
"The statistics shown in this section can be found for the other language pairs in Appendix SECREF8."
],
[
"Previous work pointed out some specific roles played by different heads in the softmax Transformer model BIBREF33, BIBREF5, BIBREF9. Identifying the specialization of a head can be done by observing the type of tokens or sequences that the head often assigns most of its attention weight; this is facilitated by sparsity."
],
[
"One particular type of head, as noted by BIBREF9, is the positional head. These heads tend to focus their attention on either the previous or next token in the sequence, thus obtaining representations of the neighborhood of the current time step. In Figure FIGREF47, we show attention plots for such heads, found for each of the studied models. The sparsity of our models allows these heads to be more confident in their representations, by assigning the whole probability distribution to a single token in the sequence. Concretely, we may measure a positional head's confidence as the average attention weight assigned to the previous token. The softmax model has three heads for position $-1$, with median confidence $93.5\\%$. The $1.5$-entmax model also has three heads for this position, with median confidence $94.4\\%$. The adaptive model has four heads, with median confidences $95.9\\%$, the lowest-confidence head being dense with $\\alpha =1.18$, while the highest-confidence head being sparse ($\\alpha =1.91$).",
"For position $+1$, the models each dedicate one head, with confidence around $95\\%$, slightly higher for entmax. The adaptive model sets $\\alpha =1.96$ for this head."
],
[
"Due to the sparsity of our models, we are able to identify other head specializations, easily identifying which heads should be further analysed. In Figure FIGREF51 we show one such head where the $\\alpha $ value is particularly high (in the encoder, layer 1, head 4 depicted in Figure FIGREF37). We found that this head most often looks at the current time step with high confidence, making it a positional head with offset 0. However, this head often spreads weight sparsely over 2-3 neighboring tokens, when the tokens are part of the same BPE cluster or hyphenated words. As this head is in the first layer, it provides a useful service to the higher layers by combining information evenly within some BPE clusters.",
"For each BPE cluster or cluster of hyphenated words, we computed a score between 0 and 1 that corresponds to the maximum attention mass assigned by any token to the rest of the tokens inside the cluster in order to quantify the BPE-merging capabilities of these heads. There are not any attention heads in the softmax model that are able to obtain a score over $80\\%$, while for $1.5$-entmax and $\\alpha $-entmax there are two heads in each ($83.3\\%$ and $85.6\\%$ for $1.5$-entmax and $88.5\\%$ and $89.8\\%$ for $\\alpha $-entmax)."
],
[
"On the other hand, in Figure FIGREF52 we show a head for which our adaptively sparse model chose an $\\alpha $ close to 1, making it closer to softmax (also shown in encoder, layer 1, head 3 depicted in Figure FIGREF37). We observe that this head assigns a high probability to question marks at the end of the sentence in time steps where the current token is interrogative, thus making it an interrogation-detecting head. We also observe this type of heads in the other models, which we also depict in Figure FIGREF52. The average attention weight placed on the question mark when the current token is an interrogative word is $98.5\\%$ for softmax, $97.0\\%$ for $1.5$-entmax, and $99.5\\%$ for $\\alpha $-entmax.",
"Furthermore, we can examine sentences where some tendentially sparse heads become less so, thus identifying sources of ambiguity where the head is less confident in its prediction. An example is shown in Figure FIGREF55 where sparsity in the same head differs for sentences of similar length."
],
[
"Prior work has developed sparse attention mechanisms, including applications to NMT BIBREF19, BIBREF12, BIBREF20, BIBREF22, BIBREF34. BIBREF14 introduced the entmax function this work builds upon. In their work, there is a single attention mechanism which is controlled by a fixed $\\alpha $. In contrast, this is the first work to allow such attention mappings to dynamically adapt their curvature and sparsity, by automatically adjusting the continuous $\\alpha $ parameter. We also provide the first results using sparse attention in a Transformer model."
],
[
"Recent research improves the scalability of Transformer-like networks through static, fixed sparsity patterns BIBREF10, BIBREF35. Our adaptively-sparse Transformer can dynamically select a sparsity pattern that finds relevant words regardless of their position (e.g., Figure FIGREF52). Moreover, the two strategies could be combined. In a concurrent line of research, BIBREF11 propose an adaptive attention span for Transformer language models. While their work has each head learn a different contiguous span of context tokens to attend to, our work finds different sparsity patterns in the same span. Interestingly, some of their findings mirror ours – we found that attention heads in the last layers tend to be denser on average when compared to the ones in the first layers, while their work has found that lower layers tend to have a shorter attention span compared to higher layers."
],
[
"The original Transformer paper BIBREF0 shows attention visualizations, from which some speculation can be made of the roles the several attention heads have. BIBREF7 study the syntactic abilities of the Transformer self-attention, while BIBREF6 extract dependency relations from the attention weights. BIBREF8 find that the self-attentions in BERT BIBREF3 follow a sequence of processes that resembles a classical NLP pipeline. Regarding redundancy of heads, BIBREF9 develop a method that is able to prune heads of the multi-head attention module and make an empirical study of the role that each head has in self-attention (positional, syntactic and rare words). BIBREF36 also aim to reduce head redundancy by adding a regularization term to the loss that maximizes head disagreement and obtain improved results. While not considering Transformer attentions, BIBREF18 show that traditional attention mechanisms do not necessarily improve interpretability since softmax attention is vulnerable to an adversarial attack leading to wildly different model predictions for the same attention weights. Sparse attention may mitigate these issues; however, our work focuses mostly on a more mechanical aspect of interpretation by analyzing head behavior, rather than on explanations for predictions."
],
[
"We contribute a novel strategy for adaptively sparse attention, and, in particular, for adaptively sparse Transformers. We present the first empirical analysis of Transformers with sparse attention mappings (i.e., entmax), showing potential in both translation accuracy as well as in model interpretability.",
"In particular, we analyzed how the attention heads in the proposed adaptively sparse Transformer can specialize more and with higher confidence. Our adaptivity strategy relies only on gradient-based optimization, side-stepping costly per-head hyper-parameter searches. Further speed-ups are possible by leveraging more parallelism in the bisection algorithm for computing $\\alpha $-entmax.",
"Finally, some of the automatically-learned behaviors of our adaptively sparse Transformers – for instance, the near-deterministic positional heads or the subword joining head – may provide new ideas for designing static variations of the Transformer."
],
[
"This work was supported by the European Research Council (ERC StG DeepSPIN 758969), and by the Fundação para a Ciência e Tecnologia through contracts UID/EEA/50008/2019 and CMUPERI/TIC/0046/2014 (GoLocal). We are grateful to Ben Peters for the $\\alpha $-entmax code and Erick Fonseca, Marcos Treviso, Pedro Martins, and Tsvetomila Mihaylova for insightful group discussion. We thank Mathieu Blondel for the idea to learn $\\alpha $. We would also like to thank the anonymous reviewers for their helpful feedback.",
"Supplementary Material"
],
[
"Definition 1 (BIBREF23)",
"Let $\\Omega \\colon \\triangle ^d \\rightarrow {\\mathbb {R}}\\cup \\lbrace \\infty \\rbrace $ be a strictly convex regularization function. We define the prediction function $\\mathbf {\\pi }_{\\Omega }$ as"
],
[
"Lemma 1 (BIBREF14) For any $\\mathbf {z}$, there exists a unique $\\tau ^\\star $ such that",
"Proof: From the definition of $\\mathop {\\mathsf {\\alpha }\\textnormal {-}\\mathsf {entmax }}$,",
"we may easily identify it with a regularized prediction function (Def. UNKREF81):",
"We first note that for all $\\mathbf {p}\\in \\triangle ^d$,",
"From the constant invariance and scaling properties of $\\mathbf {\\pi }_{\\Omega }$ BIBREF23,",
"Using BIBREF23, noting that $g^{\\prime }(t) = t^{\\alpha - 1}$ and $(g^{\\prime })^{-1}(u) = u^{{1}{\\alpha -1}}$, yields",
"Since $\\mathsf {H}^{\\textsc {T}}_\\alpha $ is strictly convex on the simplex, $\\mathop {\\mathsf {\\alpha }\\textnormal {-}\\mathsf {entmax }}$ has a unique solution $\\mathbf {p}^\\star $. Equation DISPLAY_FORM88 implicitly defines a one-to-one mapping between $\\mathbf {p}^\\star $ and $\\tau ^\\star $ as long as $\\mathbf {p}^\\star \\in \\triangle $, therefore $\\tau ^\\star $ is also unique."
],
[
"The Euclidean projection onto the simplex, sometimes referred to, in the context of neural attention, as sparsemax BIBREF19, is defined as",
"The solution can be characterized through the unique threshold $\\tau $ such that $\\sum _i \\operatornamewithlimits{\\mathsf {sparsemax}}(\\mathbf {z})_i = 1$ and BIBREF38",
"Thus, each coordinate of the sparsemax solution is a piecewise-linear function. Visibly, this expression is recovered when setting $\\alpha =2$ in the $\\alpha $-entmax expression (Equation DISPLAY_FORM85); for other values of $\\alpha $, the exponent induces curvature.",
"On the other hand, the well-known softmax is usually defined through the expression",
"which can be shown to be the unique solution of the optimization problem",
"where $\\mathsf {H}^\\textsc {S}(\\mathbf {p}) -\\sum _i p_i \\log p_i$ is the Shannon entropy. Indeed, setting the gradient to 0 yields the condition $\\log p_i = z_j - \\nu _i - \\tau - 1$, where $\\tau $ and $\\nu > 0$ are Lagrange multipliers for the simplex constraints $\\sum _i p_i = 1$ and $p_i \\ge 0$, respectively. Since the l.h.s. is only finite for $p_i>0$, we must have $\\nu _i=0$ for all $i$, by complementary slackness. Thus, the solution must have the form $p_i = {\\exp (z_i)}{Z}$, yielding Equation DISPLAY_FORM92."
],
[
"Recall that the entmax transformation is defined as:",
"where $\\alpha \\ge 1$ and $\\mathsf {H}^{\\textsc {T}}_{\\alpha }$ is the Tsallis entropy,",
"and $\\mathsf {H}^\\textsc {S}(\\mathbf {p}):= -\\sum _j p_j \\log p_j$ is the Shannon entropy.",
"In this section, we derive the Jacobian of $\\operatornamewithlimits{\\mathsf {entmax }}$ with respect to the scalar parameter $\\alpha $."
],
[
"From the KKT conditions associated with the optimization problem in Eq. DISPLAY_FORM85, we have that the solution $\\mathbf {p}^{\\star }$ has the following form, coordinate-wise:",
"where $\\tau ^{\\star }$ is a scalar Lagrange multiplier that ensures that $\\mathbf {p}^{\\star }$ normalizes to 1, i.e., it is defined implicitly by the condition:",
"For general values of $\\alpha $, Eq. DISPLAY_FORM98 lacks a closed form solution. This makes the computation of the Jacobian",
"non-trivial. Fortunately, we can use the technique of implicit differentiation to obtain this Jacobian.",
"The Jacobian exists almost everywhere, and the expressions we derive expressions yield a generalized Jacobian BIBREF37 at any non-differentiable points that may occur for certain ($\\alpha $, $\\mathbf {z}$) pairs. We begin by noting that $\\frac{\\partial p_i^{\\star }}{\\partial \\alpha } = 0$ if $p_i^{\\star } = 0$, because increasing $\\alpha $ keeps sparse coordinates sparse. Therefore we need to worry only about coordinates that are in the support of $\\mathbf {p}^\\star $. We will assume hereafter that the $i$th coordinate of $\\mathbf {p}^\\star $ is non-zero. We have:",
"We can see that this Jacobian depends on $\\frac{\\partial \\tau ^{\\star }}{\\partial \\alpha }$, which we now compute using implicit differentiation.",
"Let $\\mathcal {S} = \\lbrace i: p^\\star _i > 0 \\rbrace $). By differentiating both sides of Eq. DISPLAY_FORM98, re-using some of the steps in Eq. DISPLAY_FORM101, and recalling Eq. DISPLAY_FORM97, we get",
"from which we obtain:",
"Finally, plugging Eq. DISPLAY_FORM103 into Eq. DISPLAY_FORM101, we get:",
"where we denote by",
"The distribution $\\tilde{\\mathbf {p}}(\\alpha )$ can be interpreted as a “skewed” distribution obtained from $\\mathbf {p}^{\\star }$, which appears in the Jacobian of $\\mathop {\\mathsf {\\alpha }\\textnormal {-}\\mathsf {entmax }}(\\mathbf {z})$ w.r.t. $\\mathbf {z}$ as well BIBREF14."
],
[
"We can write Eq. DISPLAY_FORM104 as",
"When $\\alpha \\rightarrow 1^+$, we have $\\tilde{\\mathbf {p}}(\\alpha ) \\rightarrow \\mathbf {p}^{\\star }$, which leads to a $\\frac{0}{0}$ indetermination.",
"To solve this indetermination, we will need to apply L'Hôpital's rule twice. Let us first compute the derivative of $\\tilde{p}_i(\\alpha )$ with respect to $\\alpha $. We have",
"therefore",
"Differentiating the numerator and denominator in Eq. DISPLAY_FORM107, we get:",
"with",
"and",
"When $\\alpha \\rightarrow 1^+$, $B$ becomes again a $\\frac{0}{0}$ indetermination, which we can solve by applying again L'Hôpital's rule. Differentiating the numerator and denominator in Eq. DISPLAY_FORM112:",
"Finally, summing Eq. DISPLAY_FORM111 and Eq. DISPLAY_FORM113, we get"
],
[
"To sum up, we have the following expression for the Jacobian of $\\mathop {\\mathsf {\\alpha }\\textnormal {-}\\mathsf {entmax }}$ with respect to $\\alpha $:"
]
],
"section_name": [
"Introduction",
"Background ::: The Transformer",
"Background ::: Sparse Attention",
"Background ::: Sparse Attention ::: Properties of @!START@$\\alpha $@!END@-entmax.",
"Adaptively Sparse Transformers with @!START@$\\alpha $@!END@-entmax",
"Adaptively Sparse Transformers with @!START@$\\alpha $@!END@-entmax ::: Different @!START@$\\alpha $@!END@ per head.",
"Adaptively Sparse Transformers with @!START@$\\alpha $@!END@-entmax ::: Derivatives w.r.t. @!START@$\\alpha $@!END@.",
"Experiments",
"Experiments ::: Datasets.",
"Experiments ::: Training.",
"Experiments ::: Results.",
"Analysis",
"Analysis ::: High-Level Statistics ::: What kind of @!START@$\\alpha $@!END@ values are learned?",
"Analysis ::: High-Level Statistics ::: Attention weight density when translating.",
"Analysis ::: High-Level Statistics ::: Head diversity.",
"Analysis ::: Identifying Head Specializations",
"Analysis ::: Identifying Head Specializations ::: Positional heads.",
"Analysis ::: Identifying Head Specializations ::: BPE-merging head.",
"Analysis ::: Identifying Head Specializations ::: Interrogation head.",
"Related Work ::: Sparse attention.",
"Related Work ::: Fixed sparsity patterns.",
"Related Work ::: Transformer interpretability.",
"Conclusion and Future Work",
"Acknowledgments",
"Background ::: Regularized Fenchel-Young prediction functions",
"Background ::: Characterizing the @!START@$\\alpha $@!END@-entmax mapping",
"Background ::: Connections to softmax and sparsemax",
"Jacobian of @!START@$\\alpha $@!END@-entmax w.r.t. the shape parameter @!START@$\\alpha $@!END@: Proof of Proposition @!START@UID22@!END@",
"Jacobian of @!START@$\\alpha $@!END@-entmax w.r.t. the shape parameter @!START@$\\alpha $@!END@: Proof of Proposition @!START@UID22@!END@ ::: General case of @!START@$\\alpha >1$@!END@",
"Jacobian of @!START@$\\alpha $@!END@-entmax w.r.t. the shape parameter @!START@$\\alpha $@!END@: Proof of Proposition @!START@UID22@!END@ ::: Solving the indetermination for @!START@$\\alpha =1$@!END@",
"Jacobian of @!START@$\\alpha $@!END@-entmax w.r.t. the shape parameter @!START@$\\alpha $@!END@: Proof of Proposition @!START@UID22@!END@ ::: Summary"
]
} | {
"answers": [
{
"annotation_id": [
"0b47efe11c866b21fe1767b1a0ede6228025f81d",
"2cda1de47438e485eef6459914665037b35754c4"
],
"answer": [
{
"evidence": [
"We apply our adaptively sparse Transformers on four machine translation tasks. For comparison, a natural baseline is the standard Transformer architecture using the softmax transform in its multi-head attention mechanisms. We consider two other model variants in our experiments that make use of different normalizing transformations:",
"IWSLT 2017 German $\\rightarrow $ English BIBREF27: 200K sentence pairs.",
"KFTT Japanese $\\rightarrow $ English BIBREF28: 300K sentence pairs.",
"WMT 2016 Romanian $\\rightarrow $ English BIBREF29: 600K sentence pairs.",
"WMT 2014 English $\\rightarrow $ German BIBREF30: 4.5M sentence pairs."
],
"extractive_spans": [],
"free_form_answer": "four machine translation tasks: German -> English, Japanese -> English, Romanian -> English, English -> German",
"highlighted_evidence": [
"We apply our adaptively sparse Transformers on four machine translation tasks. ",
"IWSLT 2017 German $\\rightarrow $ English BIBREF27: 200K sentence pairs.\n\nKFTT Japanese $\\rightarrow $ English BIBREF28: 300K sentence pairs.\n\nWMT 2016 Romanian $\\rightarrow $ English BIBREF29: 600K sentence pairs.\n\nWMT 2014 English $\\rightarrow $ German BIBREF30: 4.5M sentence pairs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We apply our adaptively sparse Transformers on four machine translation tasks. For comparison, a natural baseline is the standard Transformer architecture using the softmax transform in its multi-head attention mechanisms. We consider two other model variants in our experiments that make use of different normalizing transformations:",
"Our models were trained on 4 machine translation datasets of different training sizes:",
"[itemsep=.5ex,leftmargin=2ex]",
"IWSLT 2017 German $\\rightarrow $ English BIBREF27: 200K sentence pairs.",
"KFTT Japanese $\\rightarrow $ English BIBREF28: 300K sentence pairs.",
"WMT 2016 Romanian $\\rightarrow $ English BIBREF29: 600K sentence pairs.",
"WMT 2014 English $\\rightarrow $ German BIBREF30: 4.5M sentence pairs.",
"All of these datasets were preprocessed with byte-pair encoding BIBREF31, using joint segmentations of 32k merge operations."
],
"extractive_spans": [
" four machine translation tasks",
"IWSLT 2017 German $\\rightarrow $ English BIBREF27",
"KFTT Japanese $\\rightarrow $ English BIBREF28",
"WMT 2016 Romanian $\\rightarrow $ English BIBREF29",
"WMT 2014 English $\\rightarrow $ German BIBREF30"
],
"free_form_answer": "",
"highlighted_evidence": [
"We apply our adaptively sparse Transformers on four machine translation tasks. For comparison, a natural baseline is the standard Transformer architecture using the softmax transform in its multi-head attention mechanisms. ",
"Our models were trained on 4 machine translation datasets of different training sizes:\n\n[itemsep=.5ex,leftmargin=2ex]\n\nIWSLT 2017 German $\\rightarrow $ English BIBREF27: 200K sentence pairs.\n\nKFTT Japanese $\\rightarrow $ English BIBREF28: 300K sentence pairs.\n\nWMT 2016 Romanian $\\rightarrow $ English BIBREF29: 600K sentence pairs.\n\nWMT 2014 English $\\rightarrow $ German BIBREF30: 4.5M sentence pairs.\n\nAll of these datasets were preprocessed with byte-pair encoding BIBREF31, using joint segmentations of 32k merge operations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"4ebc2c029ec2a7c09e062b24455d386830b0b1ba"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Machine translation tokenized BLEU test results on IWSLT 2017 DE EN, KFTT JA EN, WMT 2016 RO EN and WMT 2014 EN DE, respectively.",
"We report test set tokenized BLEU BIBREF32 results in Table TABREF27. We can see that replacing softmax by entmax does not hurt performance in any of the datasets; indeed, sparse attention Transformers tend to have slightly higher BLEU, but their sparsity leads to a better potential for analysis. In the next section, we make use of this potential by exploring the learned internal mechanics of the self-attention heads."
],
"extractive_spans": [],
"free_form_answer": "On the datasets DE-EN, JA-EN, RO-EN, and EN-DE, the baseline achieves 29.79, 21.57, 32.70, and 26.02 BLEU score, respectively. The 1.5-entmax achieves 29.83, 22.13, 33.10, and 25.89 BLEU score, which is a difference of +0.04, +0.56, +0.40, and -0.13 BLEU score versus the baseline. The α-entmax achieves 29.90, 21.74, 32.89, and 26.93 BLEU score, which is a difference of +0.11, +0.17, +0.19, +0.91 BLEU score versus the baseline.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Machine translation tokenized BLEU test results on IWSLT 2017 DE EN, KFTT JA EN, WMT 2016 RO EN and WMT 2014 EN DE, respectively.",
"We report test set tokenized BLEU BIBREF32 results in Table TABREF27. We can see that replacing softmax by entmax does not hurt performance in any of the datasets; indeed, sparse attention Transformers tend to have slightly higher BLEU, but their sparsity leads to a better potential for analysis."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
},
{
"annotation_id": [
"88cd24b41b1f658522fcd3bb1060809343cf8f49",
"e02d66e2bb3b896a2f78591ab4984e59649cb1ec"
],
"answer": [
{
"evidence": [
"In particular, we analyzed how the attention heads in the proposed adaptively sparse Transformer can specialize more and with higher confidence. Our adaptivity strategy relies only on gradient-based optimization, side-stepping costly per-head hyper-parameter searches. Further speed-ups are possible by leveraging more parallelism in the bisection algorithm for computing $\\alpha $-entmax."
],
"extractive_spans": [
"the attention heads in the proposed adaptively sparse Transformer can specialize more and with higher confidence"
],
"free_form_answer": "",
"highlighted_evidence": [
"the attention heads in the proposed adaptively sparse Transformer can specialize more and with higher confidence",
"In particular, we analyzed how the attention heads in the proposed adaptively sparse Transformer can specialize more and with higher confidence."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains.",
"The softmax mapping (Equation DISPLAY_FORM8) is elementwise proportional to $\\exp $, therefore it can never assign a weight of exactly zero. Thus, unnecessary items are still taken into consideration to some extent. Since its output sums to one, this invariably means less weight is assigned to the relevant items, potentially harming performance and interpretability BIBREF18. This has motivated a line of research on learning networks with sparse mappings BIBREF19, BIBREF20, BIBREF21, BIBREF22. We focus on a recently-introduced flexible family of transformations, $\\alpha $-entmax BIBREF23, BIBREF14, defined as:"
],
"extractive_spans": [
"We introduce sparse attention into the Transformer architecture"
],
"free_form_answer": "",
"highlighted_evidence": [
"We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains.",
"The softmax mapping (Equation DISPLAY_FORM8) is elementwise proportional to $\\exp $, therefore it can never assign a weight of exactly zero. Thus, unnecessary items are still taken into consideration to some extent. Since its output sums to one, this invariably means less weight is assigned to the relevant items, potentially harming performance and interpretability BIBREF18. This has motivated a line of research on learning networks with sparse mappings BIBREF19, BIBREF20, BIBREF21, BIBREF22. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"What tasks are used for evaluation?",
"HOw does the method perform compared with baselines?",
"How does their model improve interpretability compared to softmax transformers?"
],
"question_id": [
"5c5aeee83ea3b34f5936404f5855ccb9869356c1",
"f8c1b17d265a61502347c9a937269b38fc3fcab1",
"5913930ce597513299e4b630df5e5153f3618038"
],
"question_writer": [
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61"
],
"search_query": [
"transformers",
"transformers",
"transformers"
],
"topic_background": [
"research",
"research",
"research"
]
} | {
"caption": [
"Figure 1: Attention distributions of different selfattention heads for the time step of the token “over”, shown to compare our model to other related work. While the sparse Transformer (Child et al., 2019) and the adaptive span Transformer (Sukhbaatar et al., 2019) only attend to words within a contiguous span of the past tokens, our model is not only able to obtain different and not necessarily contiguous sparsity patterns for each attention head, but is also able to tune its support over which tokens to attend adaptively.",
"Table 1: Machine translation tokenized BLEU test results on IWSLT 2017 DE EN, KFTT JA EN, WMT 2016 RO EN and WMT 2014 EN DE, respectively.",
"Figure 2: Trajectories of α values for a subset of the heads during training. Initialized at random, most heads become denser in the beginning, before converging. This suggests that dense attention may be more beneficial while the network is still uncertain, being replaced by sparse attention afterwards.",
"Figure 4: Distribution of attention densities (average number of tokens receiving non-zero attention weight) for all attention heads and all validation sentences. When compared to 1.5-entmax, α-entmax distributes the sparsity in a more uniform manner, with a clear mode at fully dense attentions, corresponding to the heads with low α. In the softmax case, this distribution would lead to a single bar with density 1.",
"Figure 3: Distribution of learned α values per attention block. While the encoder self-attention has a bimodal distribution of values of α, the decoder self-attention and context attention have a single mode.",
"Figure 5: Head density per layer for fixed and learned α. Each line corresponds to an attention head; lower values mean that that attention head is sparser. Learned α has higher variance.",
"Figure 6: Jensen-Shannon Divergence between heads at each layer. Measures the disagreement between heads: the higher the value, the more the heads are disagreeing with each other in terms of where to attend. Models using sparse entmax have more diverse attention than the softmax baseline.",
"Figure 9: Interrogation-detecting heads in the three models. The top sentence is interrogative while the bottom one is declarative but includes the interrogative word “what”. In the top example, these interrogation heads assign a high probability to the question mark in the time step of the interrogative word (with ≥ 97.0% probability), while in the bottom example since there is no question mark, the same head does not assign a high probability to the last token in the sentence during the interrogative word time step. Surprisingly, this head prefers a low α = 1.05, as can be seen from the dense weights. This allows the head to identify the noun phrase “Armani Polo” better.",
"Figure 7: Self-attention from the most confidently previous-position head in each model. The learned parameter in the α-entmax model is α = 1.91. Quantitatively more confident, visual inspection confirms that the adaptive head behaves more consistently.",
"Figure 8: BPE-merging head (α = 1.91) discovered in the α-entmax model. Found in the first encoder layer, this head learns to discover some subword units and combine their information, leaving most words intact. It places 99.09% of its probability mass within the same BPE cluster as the current token: more than any head in any other model.",
"Figure 10: Example of two sentences of similar length where the same head (α = 1.33) exhibits different sparsity. The longer phrase in the example on the right “a sexually transmitted disease” is handled with higher confidence, leading to more sparsity.",
"Figure 11: Histograms of α values.",
"Figure 12: Histograms of head densities.",
"Figure 13: Jensen-Shannon divergence over layers.",
"Figure 14: Head densities over layers."
],
"file": [
"1-Figure1-1.png",
"5-Table1-1.png",
"5-Figure2-1.png",
"6-Figure4-1.png",
"6-Figure3-1.png",
"7-Figure5-1.png",
"7-Figure6-1.png",
"8-Figure9-1.png",
"8-Figure7-1.png",
"8-Figure8-1.png",
"9-Figure10-1.png",
"12-Figure11-1.png",
"13-Figure12-1.png",
"14-Figure13-1.png",
"15-Figure14-1.png"
]
} | [
"What tasks are used for evaluation?",
"HOw does the method perform compared with baselines?"
] | [
[
"1909.00015-Experiments ::: Datasets.-2",
"1909.00015-Experiments ::: Datasets.-3",
"1909.00015-Experiments ::: Datasets.-1",
"1909.00015-Experiments ::: Datasets.-4",
"1909.00015-Experiments-0",
"1909.00015-Experiments ::: Datasets.-0",
"1909.00015-Experiments ::: Datasets.-5",
"1909.00015-Experiments ::: Datasets.-6"
],
[
"1909.00015-Experiments ::: Results.-0",
"1909.00015-5-Table1-1.png"
]
] | [
"four machine translation tasks: German -> English, Japanese -> English, Romanian -> English, English -> German",
"On the datasets DE-EN, JA-EN, RO-EN, and EN-DE, the baseline achieves 29.79, 21.57, 32.70, and 26.02 BLEU score, respectively. The 1.5-entmax achieves 29.83, 22.13, 33.10, and 25.89 BLEU score, which is a difference of +0.04, +0.56, +0.40, and -0.13 BLEU score versus the baseline. The α-entmax achieves 29.90, 21.74, 32.89, and 26.93 BLEU score, which is a difference of +0.11, +0.17, +0.19, +0.91 BLEU score versus the baseline."
] | 179 |
2001.01269 | Generating Word and Document Embeddings for Sentiment Analysis | Sentiments of words differ from one corpus to another. Inducing general sentiment lexicons for languages and using them cannot, in general, produce meaningful results for different domains. In this paper, we combine contextual and supervised information with the general semantic representations of words occurring in the dictionary. Contexts of words help us capture the domain-specific information and supervised scores of words are indicative of the polarities of those words. When we combine supervised features of words with the features extracted from their dictionary definitions, we observe an increase in the success rates. We try out the combinations of contextual, supervised, and dictionary-based approaches, and generate original vectors. We also combine the word2vec approach with hand-crafted features. We induce domain-specific sentimental vectors for two corpora, which are the movie domain and the Twitter datasets in Turkish. When we thereafter generate document vectors and employ the support vector machines method utilising those vectors, our approaches perform better than the baseline studies for Turkish with a significant margin. We evaluated our models on two English corpora as well and these also outperformed the word2vec approach. It shows that our approaches are cross-lingual and cross-domain. | {
"paragraphs": [
[
"Sentiment analysis has recently been one of the hottest topics in natural language processing (NLP). It is used to identify and categorise opinions expressed by reviewers on a topic or an entity. Sentiment analysis can be leveraged in marketing, social media analysis, and customer service. Although many studies have been conducted for sentiment analysis in widely spoken languages, this topic is still immature for Turkish and many other languages.",
"Neural networks outperform the conventional machine learning algorithms in most classification tasks, including sentiment analysis BIBREF0. In these networks, word embedding vectors are fed as input to overcome the data sparsity problem and make the representations of words more “meaningful” and robust. Those embeddings indicate how close the words are to each other in the vector space model (VSM).",
"Most of the studies utilise embeddings, such as word2vec BIBREF1, which take into account the syntactic and semantic representations of the words only. Discarding the sentimental aspects of words may lead to words of different polarities being close to each other in the VSM, if they share similar semantic and syntactic features.",
"For Turkish, there are only a few studies which leverage sentimental information in generating the word and document embeddings. Unlike the studies conducted for English and other widely-spoken languages, in this paper, we use the official dictionaries for this language and combine the unsupervised and supervised scores to generate a unified score for each dimension of the word embeddings in this task.",
"Our main contribution is to create original and effective word vectors that capture syntactic, semantic, and sentimental characteristics of words, and use all of this knowledge in generating embeddings. We also utilise the word2vec embeddings trained on a large corpus. Besides using these word embeddings, we also generate hand-crafted features on a review basis and create document vectors. We evaluate those embeddings on two datasets. The results show that we outperform the approaches which do not take into account the sentimental information. We also had better performances than other studies carried out on sentiment analysis in Turkish media. We also evaluated our novel embedding approaches on two English corpora of different genres. We outperformed the baseline approaches for this language as well. The source code and datasets are publicly available.",
"The paper is organised as follows. In Section 2, we present the existing works on sentiment classification. In Section 3, we describe the methods proposed in this work. The experimental results are shown and the main contributions of our proposed approach are discussed in Section 4. In Section 5, we conclude the paper."
],
[
"In the literature, the main consensus is that the use of dense word embeddings outperform the sparse embeddings in many tasks. Latent semantic analysis (LSA) used to be the most popular method in generating word embeddings before the invention of the word2vec and other word vector algorithms which are mostly created by shallow neural network models. Although many studies have been employed on generating word vectors including both semantic and sentimental components, generating and analysing the effects of different types of embeddings on different tasks is an emerging field for Turkish.",
"Latent Dirichlet allocation (LDA) is used in BIBREF2 to extract mixture of latent topics. However, it focusses on finding the latent topics of a document, not the word meanings themselves. In BIBREF3, LSA is utilised to generate word vectors, leveraging indirect co-occurrence statistics. These outperform the use of sparse vectors BIBREF4.",
"Some of the prior studies have also taken into account the sentimental characteristics of a word when creating word vectors BIBREF5, BIBREF6, BIBREF7. A model with semantic and sentiment components is built in BIBREF8, making use of star-ratings of reviews. In BIBREF9, a sentiment lexicon is induced preferring the use of domain-specific co-occurrence statistics over the word2vec method and they outperform the latter.",
"In a recent work on sentiment analysis in Turkish BIBREF10, they learn embeddings using Turkish social media. They use the word2vec algorithm, create several unsupervised hand-crafted features, generate document vectors and feed them as input into the support vector machines (SVM) approach. We outperform this baseline approach using more effective word embeddings and supervised hand-crafted features.",
"In English, much of the recent work on learning sentiment-specific embeddings relies only on distant supervision. In BIBREF11, emojis are used as features and a bi-LSTM (bidirectional long short-term memory) neural network model is built to learn sentiment-aware word embeddings. In BIBREF12, a neural network that learns word embeddings is built by using contextual information about the data and supervised scores of the words. This work captures the supervised information by utilising emoticons as features. Most of our approaches do not rely on a neural network model in learning embeddings. However, they produce state-of-the-art results."
],
[
"We generate several word vectors, which capture the sentimental, lexical, and contextual characteristics of words. In addition to these mostly original vectors, we also create word2vec embeddings to represent the corpus words by training the embedding model on these datasets. After generating these, we combine them with hand-crafted features to create document vectors and perform classification, as will be explained in Section 3.5."
],
[
"Contextual information is informative in the sense that, in general, similar words tend to appear in the same contexts. For example, the word smart is more likely to cooccur with the word hardworking than with the word lazy. This similarity can be defined semantically and sentimentally. In the corpus-based approach, we capture both of these characteristics and generate word embeddings specific to a domain.",
"Firstly, we construct a matrix whose entries correspond to the number of cooccurrences of the row and column words in sliding windows. Diagonal entries are assigned the number of sliding windows that the corresponding row word appears in the whole corpus. We then normalise each row by dividing entries in the row by the maximum score in it.",
"Secondly, we perform the principal component analysis (PCA) method to reduce the dimensionality. It captures latent meanings and takes into account high-order cooccurrence removing noise. The attribute (column) number of the matrix is reduced to 200. We then compute cosine similarity between each row pair $w_i$ and $w_j$ as in (DISPLAY_FORM3) to find out how similar two word vectors (rows) are.",
"Thirdly, all the values in the matrix are subtracted from 1 to create a dissimilarity matrix. Then, we feed the matrix as input into the fuzzy c-means clustering algorithm. We chose the number of clusters as 200, as it is considered a standard for word embeddings in the literature. After clustering, the dimension i for a corresponding word indicates the degree to which this word belongs to cluster i. The intuition behind this idea is that if two words are similar in the VSM, they are more likely to belong to the same clusters with akin probabilities. In the end, each word in the corpus is represented by a 200-dimensional vector.",
"In addition to this method, we also perform singular value decomposition (SVD) on the cooccurrence matrices, where we compute the matrix $M^{PPMI} = U\\Sigma V^{T}$. Positive pointwise mutual information (PPMI) scores between words are calculated and the truncated singular value decomposition is computed. We take into account the U matrix only for each word. We have chosen the singular value number as 200. That is, each word in the corpus is represented by a 200-dimensional vector as follows."
],
[
"In Turkish, there do not exist well-established sentiment lexicons as in English. In this approach, we made use of the TDK (Türk Dil Kurumu - “Turkish Language Institution”) dictionary to obtain word polarities. Although it is not a sentiment lexicon, combining it with domain-specific polarity scores obtained from the corpus led us to have state-of-the-art results.",
"We first construct a matrix whose row entries are corpus words and column entries are the words in their dictionary definitions. We followed the boolean approach. For instance, for the word cat, the column words occurring in its dictionary definition are given a score of 1. Those column words not appearing in the definition of cat are assigned a score of 0 for that corresponding row entry.",
"When we performed clustering on this matrix, we observed that those words having similar meanings are, in general, assigned to the same clusters. However, this similarity fails in capturing the sentimental characteristics. For instance, the words happy and unhappy are assigned to the same cluster, since they have the same words, such as feeling, in their dictionary definitions. However, they are of opposite polarities and should be discerned from each other.",
"Therefore, we utilise a metric to move such words away from each other in the VSM, even though they have common words in their dictionary definitions. We multiply each value in a row with the corresponding row word's raw supervised score, thereby having more meaningful clusters. Using the training data only, the supervised polarity score per word is calculated as in (DISPLAY_FORM4).",
"Here, $ w_{t}$ denotes the sentiment score of word $t$, $N_{t}$ is the number of documents (reviews or tweets) in which $t$ occurs in the dataset of positive polarity, $N$ is the number of all the words in the corpus of positive polarity. $N^{\\prime }$ denotes the corpus of negative polarity. $N^{\\prime }_{t}$ and $N^{\\prime }$ denote similar values for the negative polarity corpus. We perform normalisation to prevent the imbalance problem and add a small number to both numerator and denominator for smoothing.",
"As an alternative to multiplying with the supervised polarity scores, we also separately multiplied all the row scores with only +1 if the row word is a positive word, and with -1 if it is a negative word. We have observed it boosts the performance more compared to using raw scores.",
"The effect of this multiplication is exemplified in Figure FIGREF7, showing the positions of word vectors in the VSM. Those “x\" words are sentimentally negative words, those “o\" words are sentimentally positive ones. On the top coordinate plane, the words of opposite polarities are found to be close to each other, since they have common words in their dictionary definitions. Only the information concerned with the dictionary definitions are used there, discarding the polarity scores. However, when we utilise the supervised score (+1 or -1), words of opposite polarities (e.g. “happy\" and “unhappy\") get far away from each other as they are translated across coordinate regions. Positive words now appear in quadrant 1, whereas negative words appear in quadrant 3. Thus, in the VSM, words that are sentimentally similar to each other could be clustered more accurately. Besides clustering, we also employed the SVD method to perform dimensionality reduction on the unsupervised dictionary algorithm and used the newly generated matrix by combining it with other subapproaches. The number of dimensions is chosen as 200 again according to the $U$ matrix. The details are given in Section 3.4. When using and evaluating this subapproach on the English corpora, we used the SentiWordNet lexicon BIBREF13. We have achieved better results for the dictionary-based algorithm when we employed the SVD reduction method compared to the use of clustering."
],
[
"Our last component is a simple metric that uses four supervised scores for each word in the corpus. We extract these scores as follows. For a target word in the corpus, we scan through all of its contexts. In addition to the target word's polarity score (the self score), out of all the polarity scores of words occurring in the same contexts as the target word, minimum, maximum, and average scores are taken into consideration. The word polarity scores are computed using (DISPLAY_FORM4). Here, we obtain those scores from the training data.",
"The intuition behind this method is that those four scores are more indicative of a word's polarity rather than only one (the self score). This approach is fully supervised unlike the previous two approaches."
],
[
"In addition to using the three approaches independently, we also combined all the matrices generated in the previous approaches. That is, we concatenate the reduced forms (SVD - U) of corpus-based, dictionary-based, and the whole of 4-score vectors of each word, horizontally. Accordingly, each corpus word is represented by a 404-dimensional vector, since corpus-based and dictionary-based vector components are each composed of 200 dimensions, whereas the 4-score vector component is formed by four values.",
"The main intuition behind the ensemble method is that some approaches compensate for what the others may lack. For example, the corpus-based approach captures the domain-specific, semantic, and syntactic characteristics. On the other hand, the 4-scores method captures supervised features, and the dictionary-based approach is helpful in capturing the general semantic characteristics. That is, combining those three approaches makes word vectors more representative."
],
[
"After creating several embeddings as mentioned above, we create document (review or tweet) vectors. For each document, we sum all the vectors of words occurring in that document and take their average. In addition to it, we extract three hand-crafted polarity scores, which are minimum, mean, and maximum polarity scores, from each review. These polarity scores of words are computed as in (DISPLAY_FORM4). For example, if a review consists of five words, it would have five polarity scores and we utilise only three of these sentiment scores as mentioned. Lastly, we concatenate these three scores to the averaged word vector per review.",
"That is, each review is represented by the average word vector of its constituent word embeddings and three supervised scores. We then feed these inputs into the SVM approach. The flowchart of our framework is given in Figure FIGREF11. When combining the unsupervised features, which are word vectors created on a word basis, with supervised three scores extracted on a review basis, we have better state-of-the-art results."
],
[
"We utilised two datasets for both Turkish and English to evaluate our methods.",
"For Turkish, as the first dataset, we utilised the movie reviews which are collected from a popular website. The number of reviews in this movie corpus is 20,244 and the average number of words in reviews is 39. Each of these reviews has a star-rating score which is indicative of sentiment. These polarity scores are between the values 0.5 and 5, at intervals of 0.5. We consider a review to be negative it the score is equal to or lower than 2.5. On the other hand, if it is equal to or higher than 4, it is assumed to be positive. We have randomly selected 7,020 negative and 7,020 positive reviews and processed only them.",
"The second Turkish dataset is the Twitter corpus which is formed of tweets about Turkish mobile network operators. Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive. These tweets are manually annotated by two humans, where the labels are either positive or negative. We measured the Cohen's Kappa inter-annotator agreement score to be 0.82. If there was a disagreement on the polarity of a tweet, we removed it.",
"We also utilised two other datasets in English to test the cross-linguality of our approaches. One of them is a movie corpus collected from the web. There are 5,331 positive reviews and 5,331 negative reviews in this corpus. The other is a Twitter dataset, which has nearly 1.6 million tweets annotated through a distant supervised method BIBREF14. These tweets have positive, neutral, and negative labels. We have selected 7,020 positive tweets and 7,020 negative tweets randomly to generate a balanced dataset."
],
[
"In Turkish, people sometimes prefer to spell English characters for the corresponding Turkish characters (e.g. i for ı, c for ç) when writing in electronic format. To normalise such words, we used the Zemberek tool BIBREF15. All punctuation marks except “!\" and “?\" are removed, since they do not contribute much to the polarity of a document. We took into account emoticons, such as “:))\", and idioms, such as “kafayı yemek” (lose one's mind), since two or more words can express a sentiment together, irrespective of the individual words thereof. Since Turkish is an agglutinative language, we used the morphological parser and disambiguator tools BIBREF16, BIBREF17. We also performed negation handling and stop-word elimination. In negation handling, we append an underscore to the end of a word if it is negated. For example, “güzel değil\" (not beautiful) is redefined as “güzel_\" (beautiful_) in the feature selection stage when supervised scores are being computed."
],
[
"We used the LibSVM utility of the WEKA tool. We chose the linear kernel option to classify the reviews. We trained word2vec embeddings on all the four corpora using the Gensim library BIBREF18 with the skip-gram method. The dimension size of these embeddings are set at 200. As mentioned, other embeddings, which are generated utilising the clustering and the SVD approach, are also of size 200. For c-mean clustering, we set the maximum number of iterations at 25, unless it converges."
],
[
"We evaluated our models on four corpora, which are the movie and the Twitter datasets in Turkish and English. All of the embeddings are learnt on four corpora separately. We have used the accuracy metric since all the datasets are completely or nearly completely balanced. We performed 10-fold cross-validation for both of the datasets. We used the approximate randomisation technique to test whether our results are statistically significant. Here, we tried to predict the labels of reviews and assess the performance.",
"We obtained varying accuracies as shown in Table TABREF17. “3 feats\" features are those hand-crafted features we extracted, which are the minimum, mean, and maximum polarity scores of the reviews as explained in Section 3.5. As can be seen, at least one of our methods outperforms the baseline word2vec approach for all the Turkish and English corpora, and all categories. All of our approaches performed better when we used the supervised scores, which are extracted on a review basis, and concatenated them to word vectors. Mostly, the supervised 4-scores feature leads to the highest accuracies, since it employs the annotational information concerned with polarities on a word basis.",
"As can be seen in Table TABREF17, the clustering method, in general, yields the lowest scores. We found out that the corpus - SVD metric does always perform better than the clustering method. We attribute it to that in SVD the most important singular values are taken into account. The corpus - SVD technique outperforms the word2vec algorithm for some corpora. When we do not take into account the 3-feats technique, the corpus-based SVD method yields the highest accuracies for the English Twitter dataset. We show that simple models can outperform more complex models, such as the concatenation of the three subapproaches or the word2vec algorithm. Another interesting finding is that for some cases the accuracy decreases when we utilise the polarity labels, as in the case for the English Twitter dataset.",
"Since the TDK dictionary covers most of the domain-specific vocabulary used in the movie reviews, the dictionary method performs well. However, the dictionary lacks many of the words, occurring in the tweets; therefore, its performance is not the best of all. When the TDK method is combined with the 3-feats technique, we observed a great improvement, as can be expected.",
"Success rates obtained for the movie corpus are much better than those for the Twitter dataset for most of our approaches, since tweets are, in general, much shorter and noisier. We also found out that, when choosing the p value as 0.05, our results are statistically significant compared to the baseline approach in Turkish BIBREF10. Some of our subapproaches also produce better success rates than those sentiment analysis models employed in English BIBREF11, BIBREF12. We have achieved state-of-the-art results for the sentiment classification task for both Turkish and English. As mentioned, our approaches, in general, perform best in predicting the labels of reviews when three supervised scores are additionality utilised.",
"We also employed the convolutional neural network model (CNN). However, the SVM classifier, which is a conventional machine learning algorithm, performed better. We did not include the performances of CNN for embedding types here due to the page limit of the paper.",
"As a qualitative assessment of the word representations, given some query words we visualised the most similar words to those words using the cosine similarity metric. By assessing the similarities between a word and all the other corpus words, we can find the most akin words according to different approaches. Table TABREF18 shows the most similar words to given query words. Those words which are indicative of sentiment are, in general, found to be most similar to those words of the same polarity. For example, the most akin word to muhteşem (gorgeous) is 10/10, both of which have positive polarity. As can be seen in Table TABREF18, our corpus-based approach is more adept at capturing domain-specific features as compared to word2vec, which generally captures general semantic and syntactic characteristics, but not the sentimental ones."
],
[
"We have demonstrated that using word vectors that capture only semantic and syntactic characteristics may be improved by taking into account their sentimental aspects as well. Our approaches are cross-lingual and cross-domain. They can be applied to other domains and other languages than Turkish and English with minor changes.",
"Our study is one of the few ones that perform sentiment analysis in Turkish and leverages sentimental characteristics of words in generating word vectors and outperforms all the others. Any of the approaches we propose can be used independently of the others. Our approaches without using sentiment labels can be applied to other classification tasks, such as topic classification and concept mining.",
"The experiments show that even unsupervised approaches, as in the corpus-based approach, can outperform supervised approaches in classification tasks. Combining some approaches, which can compensate for what others lack, can help us build better vectors. Our word vectors are created by conventional machine learning algorithms; however, they, as in the corpus-based model, produce state-of-the-art results. Although we preferred to use a classical machine learning algorithm, which is SVM, over a neural network classifier to predict the labels of reviews, we achieved accuracies of over 90 per cent for the Turkish movie corpus and about 88 per cent for the English Twitter dataset.",
"We performed only binary sentiment classification in this study as most of the studies in the literature do. We will extend our system in future by using neutral reviews as well. We also plan to employ Turkish WordNet to enhance the generalisability of our embeddings as another future work."
],
[
"This work was supported by Boğaziçi University Research Fund Grant Number 6980D, and by Turkish Ministry of Development under the TAM Project number DPT2007K12-0610. Cem Rıfkı Aydın is supported by TÜBİTAK BIDEB 2211E."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Methodology ::: Corpus-based Approach",
"Methodology ::: Dictionary-based Approach",
"Methodology ::: Supervised Contextual 4-scores",
"Methodology ::: Combination of the Word Embeddings",
"Methodology ::: Generating Document Vectors",
"Datasets",
"Experiments ::: Preprocessing",
"Experiments ::: Hyperparameters",
"Experiments ::: Results",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"d4096bad9c7a7e8a7e9e37f4456548a44ff63b53",
"f281d455a8764605b74efd28cd49a64adfd2215a"
],
"answer": [
{
"evidence": [
"In a recent work on sentiment analysis in Turkish BIBREF10, they learn embeddings using Turkish social media. They use the word2vec algorithm, create several unsupervised hand-crafted features, generate document vectors and feed them as input into the support vector machines (SVM) approach. We outperform this baseline approach using more effective word embeddings and supervised hand-crafted features."
],
"extractive_spans": [],
"free_form_answer": "using word2vec to create features that are used as input to the SVM",
"highlighted_evidence": [
"In a recent work on sentiment analysis in Turkish BIBREF10, they learn embeddings using Turkish social media. They use the word2vec algorithm, create several unsupervised hand-crafted features, generate document vectors and feed them as input into the support vector machines (SVM) approach. We outperform this baseline approach using more effective word embeddings and supervised hand-crafted features."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In a recent work on sentiment analysis in Turkish BIBREF10, they learn embeddings using Turkish social media. They use the word2vec algorithm, create several unsupervised hand-crafted features, generate document vectors and feed them as input into the support vector machines (SVM) approach. We outperform this baseline approach using more effective word embeddings and supervised hand-crafted features."
],
"extractive_spans": [
"use the word2vec algorithm, create several unsupervised hand-crafted features, generate document vectors and feed them as input into the support vector machines (SVM) approach"
],
"free_form_answer": "",
"highlighted_evidence": [
"They use the word2vec algorithm, create several unsupervised hand-crafted features, generate document vectors and feed them as input into the support vector machines (SVM) approach."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"83937cb549cc85bae93dd1be666a744a973a7942",
"ee576cbff7447c74aa5d688e28fae68daea1b86a"
],
"answer": [
{
"evidence": [
"The second Turkish dataset is the Twitter corpus which is formed of tweets about Turkish mobile network operators. Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive. These tweets are manually annotated by two humans, where the labels are either positive or negative. We measured the Cohen's Kappa inter-annotator agreement score to be 0.82. If there was a disagreement on the polarity of a tweet, we removed it."
],
"extractive_spans": [
"Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive."
],
"free_form_answer": "",
"highlighted_evidence": [
"The second Turkish dataset is the Twitter corpus which is formed of tweets about Turkish mobile network operators. Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive. These tweets are manually annotated by two humans, where the labels are either positive or negative."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The second Turkish dataset is the Twitter corpus which is formed of tweets about Turkish mobile network operators. Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive. These tweets are manually annotated by two humans, where the labels are either positive or negative. We measured the Cohen's Kappa inter-annotator agreement score to be 0.82. If there was a disagreement on the polarity of a tweet, we removed it.",
"We also utilised two other datasets in English to test the cross-linguality of our approaches. One of them is a movie corpus collected from the web. There are 5,331 positive reviews and 5,331 negative reviews in this corpus. The other is a Twitter dataset, which has nearly 1.6 million tweets annotated through a distant supervised method BIBREF14. These tweets have positive, neutral, and negative labels. We have selected 7,020 positive tweets and 7,020 negative tweets randomly to generate a balanced dataset."
],
"extractive_spans": [],
"free_form_answer": "one of the Twitter datasets is about Turkish mobile network operators, there are positive, neutral and negative labels and provide the total amount plus the distribution of labels",
"highlighted_evidence": [
"The second Turkish dataset is the Twitter corpus which is formed of tweets about Turkish mobile network operators. Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive. These tweets are manually annotated by two humans, where the labels are either positive or negative. We measured the Cohen's Kappa inter-annotator agreement score to be 0.82. If there was a disagreement on the polarity of a tweet, we removed it.\n\nWe also utilised two other datasets in English to test the cross-linguality of our approaches. One of them is a movie corpus collected from the web. There are 5,331 positive reviews and 5,331 negative reviews in this corpus. The other is a Twitter dataset, which has nearly 1.6 million tweets annotated through a distant supervised method BIBREF14. These tweets have positive, neutral, and negative labels. We have selected 7,020 positive tweets and 7,020 negative tweets randomly to generate a balanced dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"0c776d0d1cbdcbb3f39c2748589aa20ee755adc2",
"80295b295b55c86e567a782e5d7d15f1f054d434"
],
"answer": [
{
"evidence": [
"For Turkish, as the first dataset, we utilised the movie reviews which are collected from a popular website. The number of reviews in this movie corpus is 20,244 and the average number of words in reviews is 39. Each of these reviews has a star-rating score which is indicative of sentiment. These polarity scores are between the values 0.5 and 5, at intervals of 0.5. We consider a review to be negative it the score is equal to or lower than 2.5. On the other hand, if it is equal to or higher than 4, it is assumed to be positive. We have randomly selected 7,020 negative and 7,020 positive reviews and processed only them."
],
"extractive_spans": [],
"free_form_answer": "there are 20,244 reviews divided into positive and negative with an average 39 words per review, each one having a star-rating score",
"highlighted_evidence": [
"For Turkish, as the first dataset, we utilised the movie reviews which are collected from a popular website. The number of reviews in this movie corpus is 20,244 and the average number of words in reviews is 39. Each of these reviews has a star-rating score which is indicative of sentiment. These polarity scores are between the values 0.5 and 5, at intervals of 0.5. We consider a review to be negative it the score is equal to or lower than 2.5. On the other hand, if it is equal to or higher than 4, it is assumed to be positive. We have randomly selected 7,020 negative and 7,020 positive reviews and processed only them.\n\n"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For Turkish, as the first dataset, we utilised the movie reviews which are collected from a popular website. The number of reviews in this movie corpus is 20,244 and the average number of words in reviews is 39. Each of these reviews has a star-rating score which is indicative of sentiment. These polarity scores are between the values 0.5 and 5, at intervals of 0.5. We consider a review to be negative it the score is equal to or lower than 2.5. On the other hand, if it is equal to or higher than 4, it is assumed to be positive. We have randomly selected 7,020 negative and 7,020 positive reviews and processed only them."
],
"extractive_spans": [
"The number of reviews in this movie corpus is 20,244 and the average number of words in reviews is 39. Each of these reviews has a star-rating score which is indicative of sentiment."
],
"free_form_answer": "",
"highlighted_evidence": [
"For Turkish, as the first dataset, we utilised the movie reviews which are collected from a popular website. The number of reviews in this movie corpus is 20,244 and the average number of words in reviews is 39. Each of these reviews has a star-rating score which is indicative of sentiment."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"edd9baca583d1a7b29e850a57cadf88c8f463818",
"f39d3bb086e25eba1abe85132d0e3fdfaf24cf25"
],
"answer": [
{
"evidence": [
"After creating several embeddings as mentioned above, we create document (review or tweet) vectors. For each document, we sum all the vectors of words occurring in that document and take their average. In addition to it, we extract three hand-crafted polarity scores, which are minimum, mean, and maximum polarity scores, from each review. These polarity scores of words are computed as in (DISPLAY_FORM4). For example, if a review consists of five words, it would have five polarity scores and we utilise only three of these sentiment scores as mentioned. Lastly, we concatenate these three scores to the averaged word vector per review.",
"That is, each review is represented by the average word vector of its constituent word embeddings and three supervised scores. We then feed these inputs into the SVM approach. The flowchart of our framework is given in Figure FIGREF11. When combining the unsupervised features, which are word vectors created on a word basis, with supervised three scores extracted on a review basis, we have better state-of-the-art results."
],
"extractive_spans": [
"three hand-crafted polarity scores, which are minimum, mean, and maximum polarity scores"
],
"free_form_answer": "",
"highlighted_evidence": [
"In addition to it, we extract three hand-crafted polarity scores, which are minimum, mean, and maximum polarity scores, from each review. These polarity scores of words are computed as in (DISPLAY_FORM4). For example, if a review consists of five words, it would have five polarity scores and we utilise only three of these sentiment scores as mentioned. Lastly, we concatenate these three scores to the averaged word vector per review.\n\nThat is, each review is represented by the average word vector of its constituent word embeddings and three supervised scores. We then feed these inputs into the SVM approach. The flowchart of our framework is given in Figure FIGREF11. When combining the unsupervised features, which are word vectors created on a word basis, with supervised three scores extracted on a review basis, we have better state-of-the-art results."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"After creating several embeddings as mentioned above, we create document (review or tweet) vectors. For each document, we sum all the vectors of words occurring in that document and take their average. In addition to it, we extract three hand-crafted polarity scores, which are minimum, mean, and maximum polarity scores, from each review. These polarity scores of words are computed as in (DISPLAY_FORM4). For example, if a review consists of five words, it would have five polarity scores and we utilise only three of these sentiment scores as mentioned. Lastly, we concatenate these three scores to the averaged word vector per review."
],
"extractive_spans": [
"polarity scores, which are minimum, mean, and maximum polarity scores, from each review"
],
"free_form_answer": "",
"highlighted_evidence": [
"In addition to it, we extract three hand-crafted polarity scores, which are minimum, mean, and maximum polarity scores, from each review."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1a763e25e791ca05ec7d4e88459aaae4288bafd0"
],
"answer": [
{
"evidence": [
"Methodology ::: Corpus-based Approach",
"Contextual information is informative in the sense that, in general, similar words tend to appear in the same contexts. For example, the word smart is more likely to cooccur with the word hardworking than with the word lazy. This similarity can be defined semantically and sentimentally. In the corpus-based approach, we capture both of these characteristics and generate word embeddings specific to a domain.",
"Methodology ::: Dictionary-based Approach",
"In Turkish, there do not exist well-established sentiment lexicons as in English. In this approach, we made use of the TDK (Türk Dil Kurumu - “Turkish Language Institution”) dictionary to obtain word polarities. Although it is not a sentiment lexicon, combining it with domain-specific polarity scores obtained from the corpus led us to have state-of-the-art results."
],
"extractive_spans": [
"generate word embeddings specific to a domain",
"TDK (Türk Dil Kurumu - “Turkish Language Institution”) dictionary to obtain word polarities"
],
"free_form_answer": "",
"highlighted_evidence": [
"Methodology ::: Corpus-based Approach\nContextual information is informative in the sense that, in general, similar words tend to appear in the same contexts.",
" In the corpus-based approach, we capture both of these characteristics and generate word embeddings specific to a domain.",
"Methodology ::: Dictionary-based Approach\nIn Turkish, there do not exist well-established sentiment lexicons as in English. In this approach, we made use of the TDK (Türk Dil Kurumu - “Turkish Language Institution”) dictionary to obtain word polarities."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"feecf756876f1cdf237effcdc40b9ff1d00031fa"
],
"answer": [
{
"evidence": [
"The effect of this multiplication is exemplified in Figure FIGREF7, showing the positions of word vectors in the VSM. Those “x\" words are sentimentally negative words, those “o\" words are sentimentally positive ones. On the top coordinate plane, the words of opposite polarities are found to be close to each other, since they have common words in their dictionary definitions. Only the information concerned with the dictionary definitions are used there, discarding the polarity scores. However, when we utilise the supervised score (+1 or -1), words of opposite polarities (e.g. “happy\" and “unhappy\") get far away from each other as they are translated across coordinate regions. Positive words now appear in quadrant 1, whereas negative words appear in quadrant 3. Thus, in the VSM, words that are sentimentally similar to each other could be clustered more accurately. Besides clustering, we also employed the SVD method to perform dimensionality reduction on the unsupervised dictionary algorithm and used the newly generated matrix by combining it with other subapproaches. The number of dimensions is chosen as 200 again according to the $U$ matrix. The details are given in Section 3.4. When using and evaluating this subapproach on the English corpora, we used the SentiWordNet lexicon BIBREF13. We have achieved better results for the dictionary-based algorithm when we employed the SVD reduction method compared to the use of clustering."
],
"extractive_spans": [
"(+1 or -1), words of opposite polarities (e.g. “happy\" and “unhappy\") get far away from each other"
],
"free_form_answer": "",
"highlighted_evidence": [
"Only the information concerned with the dictionary definitions are used there, discarding the polarity scores. However, when we utilise the supervised score (+1 or -1), words of opposite polarities (e.g. “happy\" and “unhappy\") get far away from each other as they are translated across coordinate regions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What baseline method is used?",
"What details are given about the Twitter dataset?",
"What details are given about the movie domain dataset?",
"Which hand-crafted features are combined with word2vec?",
"What word-based and dictionary-based feature are used?",
"How are the supervised scores of the words calculated?"
],
"question_id": [
"81d193672090295e687bc4f4ac1b7a9c76ea35df",
"cf171fad0bea5ab985c53d11e48e7883c23cdc44",
"2a564b092916f2fabbfe893cf13de169945ef2e1",
"0d34c0812f1e69ea33f76ca8c24c23b0415ebc8d",
"73e83c54251f6a07744413ac8b8bed6480b2294f",
"3355918bbdccac644afe441f085d0ffbbad565d7"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. The effect of using the supervised scores of words in the dictionary algorithm. It shows how sentimentally similar word vectors get closer to each other in the VSM.",
"Fig. 2. The flowchart of our system.",
"Table 1. Accuracies for different feature sets fed as input into the SVM classifier in predicting the labels of reviews. The word2vec algorithm is the baseline method.",
"Table 2. Most similar words to given queries according to our corpus-based approach and the baseline word2vec algorithm."
],
"file": [
"5-Figure1-1.png",
"7-Figure2-1.png",
"9-Table1-1.png",
"11-Table2-1.png"
]
} | [
"What baseline method is used?",
"What details are given about the Twitter dataset?",
"What details are given about the movie domain dataset?"
] | [
[
"2001.01269-Related Work-3"
],
[
"2001.01269-Datasets-3",
"2001.01269-Datasets-2"
],
[
"2001.01269-Datasets-1"
]
] | [
"using word2vec to create features that are used as input to the SVM",
"one of the Twitter datasets is about Turkish mobile network operators, there are positive, neutral and negative labels and provide the total amount plus the distribution of labels",
"there are 20,244 reviews divided into positive and negative with an average 39 words per review, each one having a star-rating score"
] | 180 |
1708.06185 | Seernet at EmoInt-2017: Tweet Emotion Intensity Estimator | The paper describes experiments on estimating emotion intensity in tweets using a generalized regressor system. The system combines lexical, syntactic and pre-trained word embedding features, trains them on general regressors and finally combines the best performing models to create an ensemble. The proposed system stood 3rd out of 22 systems in the leaderboard of WASSA-2017 Shared Task on Emotion Intensity. | {
"paragraphs": [
[
"Twitter, a micro-blogging and social networking site has emerged as a platform where people express themselves and react to events in real-time. It is estimated that nearly 500 million tweets are sent per day . Twitter data is particularly interesting because of its peculiar nature where people convey messages in short sentences using hashtags, emoticons, emojis etc. In addition, each tweet has meta data like location and language used by the sender. It's challenging to analyze this data because the tweets might not be grammatically correct and the users tend to use informal and slang words all the time. Hence, this poses an interesting problem for NLP researchers. Any advances in using this abundant and diverse data can help understand and analyze information about a person, an event, a product, an organization or a country as a whole. Many notable use cases of the twitter can be found here.",
"Along the similar lines, The Task 1 of WASSA-2017 BIBREF0 poses a problem of finding emotion intensity of four emotions namely anger, fear, joy, sadness from tweets. In this paper, we describe our approach and experiments to solve this problem. The rest of the paper is laid out as follows: Section 2 describes the system architecture, Section 3 reports results and inference from different experiments, while Section 4 points to ways that the problem can be further explored."
],
[
"The preprocessing step modifies the raw tweets before they are passed to feature extraction. Tweets are processed using tweetokenize tool. Twitter specific features are replaced as follows: username handles to USERNAME, phone numbers to PHONENUMBER, numbers to NUMBER, URLs to URL and times to TIME. A continuous sequence of emojis is broken into individual tokens. Finally, all tokens are converted to lowercase."
],
[
"Many tasks related to sentiment or emotion analysis depend upon affect, opinion, sentiment, sense and emotion lexicons. These lexicons associate words to corresponding sentiment or emotion metrics. On the other hand, the semantic meaning of words, sentences, and documents are preserved and compactly represented using low dimensional vectors BIBREF1 instead of one hot encoding vectors which are sparse and high dimensional. Finally, there are traditional NLP features like word N-grams, character N-grams, Part-Of-Speech N-grams and word clusters which are known to perform well on various tasks.",
"Based on these observations, the feature extraction step is implemented as a union of different independent feature extractors (featurizers) in a light-weight and easy to use Python program EmoInt . It comprises of all features available in the baseline model BIBREF2 along with additional feature extractors and bi-gram support. Fourteen such feature extractors have been implemented which can be clubbed into 3 major categories:",
"[noitemsep]",
"Lexicon Features",
"Word Vectors",
"Syntax Features",
"Lexicon Features: AFINN BIBREF3 word list are manually rated for valence with an integer between -5 (Negative Sentiment) and +5 (Positive Sentiment). Bing Liu BIBREF4 opinion lexicon extract opinion on customer reviews. +/-EffectWordNet BIBREF5 by MPQA group are sense level lexicons. The NRC Affect Intensity BIBREF6 lexicons provide real valued affect intensity. NRC Word-Emotion Association Lexicon BIBREF7 contains 8 sense level associations (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and 2 sentiment level associations (negative and positive). Expanded NRC Word-Emotion Association Lexicon BIBREF8 expands the NRC word-emotion association lexicon for twitter specific language. NRC Hashtag Emotion Lexicon BIBREF9 contains emotion word associations computed on emotion labeled twitter corpus via Hashtags. NRC Hashtag Sentiment Lexicon and Sentiment140 Lexicon BIBREF10 contains sentiment word associations computed on twitter corpus via Hashtags and Emoticons. SentiWordNet BIBREF11 assigns to each synset of WordNet three sentiment scores: positivity, negativity, objectivity. Negation lexicons collections are used to count the total occurrence of negative words. In addition to these, SentiStrength BIBREF12 application which estimates the strength of positive and negative sentiment from tweets is also added.",
"Word Vectors: We focus primarily on the word vector representations (word embeddings) created specifically using the twitter dataset. GloVe BIBREF13 is an unsupervised learning algorithm for obtaining vector representations for words. 200-dimensional GloVe embeddings trained on 2 Billion tweets are integrated. Edinburgh embeddings BIBREF14 are obtained by training skip-gram model on Edinburgh corpus BIBREF15 . Since tweets are abundant with emojis, Emoji embeddings BIBREF16 which are learned from the emoji descriptions have been used. Embeddings for each tweet are obtained by summing up individual word vectors and then dividing by the number of tokens in the tweet.",
"Syntactic Features: Syntax specific features such as Word N-grams, Part-Of-Speech N-grams BIBREF17 , Brown Cluster N-grams BIBREF18 obtained using TweetNLP project have been integrated into the system.",
"The final feature vector is the concatenation of all the individual features. For example, we concatenate average word vectors, sum of NRC Affect Intensities, number of positive and negative Bing Liu lexicons, number of negation words and so on to get final feature vector. The scaling of final features is not required when used with gradient boosted trees. However, scaling steps like standard scaling (zero mean and unit normal) may be beneficial for neural networks as the optimizers work well when the data is centered around origin.",
"A total of fourteen different feature extractors have been implemented, all of which can be enabled or disabled individually to extract features from a given tweet."
],
[
"The dev data set BIBREF19 in the competition was small hence, the train and dev sets were merged to perform 10-fold cross validation. On each fold, a model was trained and the predictions were collected on the remaining dataset. The predictions are averaged across all the folds to generalize the solution and prevent over-fitting. As described in Section SECREF6 , different combinations of feature extractors were used. After performing feature extraction, the data was then passed to various regressors Support Vector Regression, AdaBoost, RandomForestRegressor, and, BaggingRegressor of sklearn BIBREF20 . Finally, the chosen top performing models had the least error on evaluation metrics namely Pearson's Correlation Coefficient and Spearman's rank-order correlation."
],
[
"In order to find the optimal parameter values for the EmoInt system, an extensive grid search was performed through the scikit-Learn framework over all subsets of the training set (shuffled), using stratified 10-fold cross validation and optimizing the Pearson's Correlation score. Best cross-validation results were obtained using AdaBoost meta regressor with base regressor as XGBoost BIBREF21 with 1000 estimators and 0.1 learning rate. Experiments and analysis of results are presented in the next section."
],
[
"As described in Section SECREF6 various syntax features were used namely, Part-of-Speech tags, brown clusters of TweetNLP project. However, these didn't perform well in cross validation. Hence, they were dropped from the final system. While performing grid-search as mentioned in Section SECREF14 , keeping all the lexicon based features same, choice of combination of emoji vector and word vectors are varied to minimize cross validation metric. Table TABREF16 describes the results for experiments conducted with different combinations of word vectors. Emoji embeddings BIBREF16 give better results than using plain GloVe and Edinburgh embeddings. Edinburgh embeddings outperform GloVe embeddings in Joy and Sadness category but lag behind in Anger and Fear category. The official submission comprised of the top-performing model for each emotion category. This system ranked 3 for the entire test dataset and 2 for the subset of the test data formed by taking every instance with a gold emotion intensity score greater than or equal to 0.5. Post competition, experiments were performed on ensembling diverse models for improving the accuracy. An ensemble obtained by averaging the results of the top 2 performing models outperforms all the individual models."
],
[
"The relative feature importance can be assessed by the relative depth of the feature used as a decision node in the tree. Features used at the top of the tree contribute to the final prediction decision of a larger fraction of the input samples. The expected fraction of the samples they contribute to can thus be used as an estimate of the relative importance of the features. By averaging the measure over several randomized trees, the variance of the estimate can be reduced and used as a measure of relative feature importance. In Figure FIGREF18 feature importance graphs are plotted for each emotion to infer which features are playing the major role in identifying emotional intensity in tweets. +/-EffectWordNet BIBREF5 , NRC Hashtag Sentiment Lexicon, Sentiment140 Lexicon BIBREF10 and NRC Hashtag Emotion Lexicon BIBREF9 are playing the most important role."
],
[
"It is important to understand how the model performs in different scenarios. Table TABREF20 analyzes when the system performs the best and worst for each emotion. Since the features used are mostly lexicon based, the system has difficulties in capturing the overall sentiment and it leads to amplifying or vanishing intensity signals. For instance, in example 4 of fear louder and shaking lexicons imply fear but overall sentence doesn't imply fear. A similar pattern can be found in the 4 example of Anger and 3 example of Joy. The system has difficulties in understanding of sarcastic tweets, for instance, in the 3 tweet of Anger the user expressed anger but used lol which is used in a positive sense most of the times and hence the system did a bad job at predicting intensity. The system also fails in predicting sentences having deeper emotion and sentiment which humans can understand with a little context. For example, in sample 4 of sadness, the tweet refers to post travel blues which humans can understand. But with little context, it is difficult for the system to accurately estimate the intensity. The performance is poor with very short sentences as there are fewer indicators to provide a reasonable estimate."
],
[
"The paper studies the effectiveness of various affect lexicons word embeddings to estimate emotional intensity in tweets. A light-weight easy to use affect computing framework (EmoInt) to facilitate ease of experimenting with various lexicon features for text tasks is open-sourced. It provides plug and play access to various feature extractors and handy scripts for creating ensembles.",
"Few problems explained in the analysis section can be resolved with the help of sentence embeddings which take the context information into consideration. The features used in the system are generic enough to use them in other affective computing tasks on social media text, not just tweet data. Another interesting feature of lexicon-based systems is their good run-time performance during prediction, future work to benchmark the performance of the system can prove vital for deploying in a real-world setting."
],
[
"We would like to thank the organizers of the WASSA-2017 Shared Task on Emotion Intensity, for providing the data, the guidelines and timely support."
]
],
"section_name": [
"Introduction",
"Preprocessing",
"Feature Extraction",
"Regression",
"Parameter Optimization",
"Experimental Results",
"Feature Importance",
"System Limitations",
"Future Work & Conclusion",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"c6a95153feae9b6f59783badf92b30ed25d82313"
],
"answer": [
{
"evidence": [
"We would like to thank the organizers of the WASSA-2017 Shared Task on Emotion Intensity, for providing the data, the guidelines and timely support."
],
"extractive_spans": [
"WASSA-2017 Shared Task on Emotion Intensity"
],
"free_form_answer": "",
"highlighted_evidence": [
"We would like to thank the organizers of the WASSA-2017 Shared Task on Emotion Intensity, for providing the data, the guidelines and timely support."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0e3bbf7e353a77e03678d520901aa72b7b2eb6b2",
"77b6891039d4ce6b4d4184fc6d044a7b4cfab02c"
],
"answer": [
{
"evidence": [
"Based on these observations, the feature extraction step is implemented as a union of different independent feature extractors (featurizers) in a light-weight and easy to use Python program EmoInt . It comprises of all features available in the baseline model BIBREF2 along with additional feature extractors and bi-gram support. Fourteen such feature extractors have been implemented which can be clubbed into 3 major categories:",
"[noitemsep]",
"Lexicon Features",
"Word Vectors",
"Syntax Features"
],
"extractive_spans": [
"Fourteen "
],
"free_form_answer": "",
"highlighted_evidence": [
"Fourteen such feature extractors have been implemented which can be clubbed into 3 major categories:\n\n[noitemsep]\n\nLexicon Features\n\nWord Vectors\n\nSyntax Features"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"459c8b076787d6a741a8f4a257e048b8ec63e76d",
"8e29602095ecba55748adcb1874506797d6e98e6"
],
"answer": [
{
"evidence": [
"Word Vectors: We focus primarily on the word vector representations (word embeddings) created specifically using the twitter dataset. GloVe BIBREF13 is an unsupervised learning algorithm for obtaining vector representations for words. 200-dimensional GloVe embeddings trained on 2 Billion tweets are integrated. Edinburgh embeddings BIBREF14 are obtained by training skip-gram model on Edinburgh corpus BIBREF15 . Since tweets are abundant with emojis, Emoji embeddings BIBREF16 which are learned from the emoji descriptions have been used. Embeddings for each tweet are obtained by summing up individual word vectors and then dividing by the number of tokens in the tweet."
],
"extractive_spans": [],
"free_form_answer": "Pretrained word embeddings were not used",
"highlighted_evidence": [
"We focus primarily on the word vector representations (word embeddings) created specifically using the twitter dataset. GloVe BIBREF13 is an unsupervised learning algorithm for obtaining vector representations for words. 200-dimensional GloVe embeddings trained on 2 Billion tweets are integrated. Edinburgh embeddings BIBREF14 are obtained by training skip-gram model on Edinburgh corpus BIBREF15 . Since tweets are abundant with emojis, Emoji embeddings BIBREF16 which are learned from the emoji descriptions have been used. Embeddings for each tweet are obtained by summing up individual word vectors and then dividing by the number of tokens in the tweet."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Word Vectors: We focus primarily on the word vector representations (word embeddings) created specifically using the twitter dataset. GloVe BIBREF13 is an unsupervised learning algorithm for obtaining vector representations for words. 200-dimensional GloVe embeddings trained on 2 Billion tweets are integrated. Edinburgh embeddings BIBREF14 are obtained by training skip-gram model on Edinburgh corpus BIBREF15 . Since tweets are abundant with emojis, Emoji embeddings BIBREF16 which are learned from the emoji descriptions have been used. Embeddings for each tweet are obtained by summing up individual word vectors and then dividing by the number of tokens in the tweet."
],
"extractive_spans": [
"GloVe",
"Edinburgh embeddings BIBREF14",
"Emoji embeddings BIBREF16"
],
"free_form_answer": "",
"highlighted_evidence": [
"GloVe embeddings trained on 2 Billion tweets are integrated.",
"Edinburgh embeddings BIBREF14 are obtained by training skip-gram model on Edinburgh corpus BIBREF15 .",
"Emoji embeddings BIBREF16 which are learned from the emoji descriptions have been used."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"what dataset was used?",
"how many total combined features were there?",
"what pretrained word embeddings were used?"
],
"question_id": [
"e48e750743aef36529fbea4328b8253dbe928b4d",
"c08aab979dcdc8f4fe8ec1337c3c8290ab13414e",
"8756b7b9ff5e87e4efdf6c2f73a0512f05b5ae3f"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Figure 1: System Architecture",
"Table 1: Evaluation Metrics for various systems. Systems are abbreviated as following: For example Em1-Ed0-Gl1 implies Emoji embeddings and GloVe embeddings are included, Edinburgh embeddings are not included in features keeping other features same. Results marked with * corresponds to official submission. Results in bold are the best results corresponding to that metric.",
"Figure 2: Relative Feature Importance of Various Emotions",
"Table 2: Sample tweets where our system’s prediction is best and worst."
],
"file": [
"1-Figure1-1.png",
"4-Table1-1.png",
"5-Figure2-1.png",
"6-Table2-1.png"
]
} | [
"what pretrained word embeddings were used?"
] | [
[
"1708.06185-Feature Extraction-7"
]
] | [
"Pretrained word embeddings were not used"
] | 181 |
1705.01214 | A Hybrid Architecture for Multi-Party Conversational Systems | Multi-party Conversational Systems are systems with natural language interaction between one or more people or systems. From the moment that an utterance is sent to a group, to the moment that it is replied to in the group by a member, several activities must be done by the system: utterance understanding, information search, reasoning, among others. In this paper we present the challenges of designing and building multi-party conversational systems, the state of the art, our proposed hybrid architecture using both rules and machine learning, and some insights after implementing and evaluating one on the finance domain. | {
"paragraphs": [
[
"Back to 42 BC, the philosopher Cicero has raised the issue that although there were many Oratory classes, there were none for Conversational skills BIBREF0 . He highlighted how important they were not only for politics, but also for educational purpose. Among other conversational norms, he claimed that people should be able to know when to talk in a conversation, what to talk depending on the subject of the conversation, and that they should not talk about themselves.",
"Norms such as these may become social conventions and are not learnt at home or at school. Social conventions are dynamic and may change according to context, culture and language. In online communication, new commonsense practices are evolved faster and accepted as a norm BIBREF1 , BIBREF2 . There is not a discipline for that on elementary or high schools and there are few linguistics researchers doing research on this field.",
"On the other hand, within the Artificial Intelligence area, some Conversational Systems have been created in the past decades since the test proposed by Alan Turing in 1950. The test consists of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from that of a human BIBREF3 . Turing proposed that a human evaluator would judge natural language conversations between a human and a machine that is designed to generate human-like responses. Since then, many systems have been created to pass the Turing's test. Some of them have won prizes, some not BIBREF4 . Although in this paper we do not focus on creating a solution that is able to build conversational systems that pass the Turing's test, we focus on NDS. From BIBREF5 , \"NDS are systems that try to improve usability and user satisfaction by imitating human behavior\". We refer to Conversational Systems as NDS, where the dialogues are expressed as natural language texts, either from artificial intelligent agents (a.k.a. bots) or from humans.",
"That said, the current popular name to systems that have the ability to make a conversation with humans using natural language is Chatbot. Chatbots are typically used in conversational systems for various practical purposes, including customer service or information acquisition. Chatbots are becoming more widely used by social media software vendors. For example, Facebook recently announced that it would make Facebook Messenger (its 900-million-user messaging app by 2016), into a full-fledged platform that allows businesses to communicate with users via chatbots. Google is also building a new mobile-messaging service that uses artificial intelligence know-how and chatbot technology. In addition, according to the Wall Street Journal, there are more than 2 billion users of mobile apps. Still, people can be reluctant to install apps. So it is believed that social messaging can be a platform and chatbots may provide a new conversational interface for interacting with online services, as chatbots are easier to build and deploy than apps BIBREF6 .",
"China seems to be the place where chatbots adoption and use is most advanced today. For example, China's popular WeChat messaging platform can take payments, scan QR codes, and integrate chatbot systems. WeChat integrates e-mail, chat, videocalls and sharing of large multimedia files. Users can book flights or hotels using a mixed, multimedia interaction with active bots. WeChat was first released in 2011 by Tecent, a Chinese online-gaming and social-media firm, and today more than 700 million people use it, being one of the most popular messaging apps in the world (The Economist 2016). WeChat has a mixture of real-live customer service agents and automated replies (Olson 2016).",
"Still, current existing chatbot engines do not properly handle a group chat with many users and many chatbots. This makes the chatbots considerably less social, which is a problem since there is a strong demand of having social chatbots that are able to provide different kinds of services, from traveling packages to finance advisors. This happens because there is a lack of methods and tools to design and engineer the coordination and mediation among chatbots and humans, as we present in Sections 2 and 3. In this paper, we refer to conversational systems that are able to interact with one or more people or chatbots in a multi-party chat as MPCS. Altogether, this paper is not meant to advance the state of the art on the norms for MPCS. Instead, the main contributions of this paper are threefold:",
"We then present some discussion and future work in the last section."
],
[
"There are plenty of challenges in conversation contexts, and even bigger ones when people and machines participate in those contexts. Conversation is a specialized form of interaction, which follows social conventions. Social interaction makes it possible to inform, context, create, ratify, refute, and ascribe, among other things, power, class, gender, ethnicity, and culture BIBREF2 . Social structures are the norms that emerge from the contact people have with others BIBREF7 , for example, the communicative norms of a negotiation, taking turns in a group, the cultural identity of a person, or power relationships in a work context.",
"Conventions, norms and patterns from everyday real conversations are applied when designing those systems to result in adoption and match user's expectations. BIBREF8 describes implicit interactions in a framework of interactions between humans and machines. The framework is based on the theory of implicit interactions which posits that people rely on conventions of interaction to communicate queries, offers, responses, and feedback to one another. Conventions and patterns drive our expectations about interactive behaviors. This framework helps designers and developers create interactions that are more socially appropriate. According to the author, we have interfaces which are based on explicit interaction and implicit ones. The explicit are the interactions or interfaces where people rely on explicit input and output, whereas implicit interactions are the ones that occur without user awareness of the computer behavior.",
"Social practices and actions are essential for a conversation to take place during the turn-by-turn moments of communication. BIBREF9 highlights that a distinguishing feature of ordinary conversation is \"the local, moment-by-moment management of the distribution of turns, of their size, and what gets done in them, those things being accomplished in the course of each current speaker's turn.\" Management of turns and subject change in each course is a situation that occurs in real life conversations based on circumstances (internal and external) to speakers in a dialogue. Nowadays, machines are not prepared to fully understand context and change the course of conversations as humans. Managing dialogues with machines is challenging, which increases even more when more than one conversational agent is part of the same conversation. Some of those challenges in the dialogue flow were addressed by BIBREF10 . According to them, we have system-initiative, user-initiative, and mixed-initiative systems.",
"In the first case, system-initiative systems restrict user options, asking direct questions, such as (Table TABREF5 ): \"What is the initial amount of investment?\" Doing so, those types of systems are more successful and easier to answer to. On the other hand, user-initiative systems are the ones where users have freedom to ask what they wish. In this context, users may feel uncertain of the capabilities of the system and starting asking questions or requesting information or services which might be quite far from the system domain and understanding capacity, leading to user frustration. There is also a mixed-initiative approach, that is, a goal-oriented dialogue which users and computers participate interactively using a conversational paradigm. Challenges of this last classification are to understand interruptions, human utterances, and unclear sentences that were not always goal-oriented.",
"The dialog in Table TABREF5 has the system initiative in a question and answer mode, while the one in Table TABREF7 is a natural dialogue system where both the user and the system take the initiative. If we add another user in the chat, then we face other challenges.",
"In Table TABREF12 , line 4, the user U1 invites another person to the chat and the system does not reply to this utterance, nor to utterances on lines 6, 7 and 8 which are the ones when only the users (wife and husband) should reply to. On the other hand, when the couple agrees on the period and initial value of the investment (line 9), then the system S1 (at the time the only system in the chat) replies indicating that it will invite more systems (chatbots) that are experts on this kind of pair INLINEFORM0 period, initial value INLINEFORM1 . They then join the chat and start interacting with each other. At the end, on line 17, the user U2 interacts with U1 and they agree with the certificate option. Then, the chatbot responsible for that, S3, is the only one that replies indicating how to invest.",
"Table TABREF12 is one example of interactions on which the chatbots require knowledge of when to reply given the context of the dialog. In general, we acknowledge that exist four dimensions of understanding and replying to an utterance in MPCS which a chatbot that interacts in a multi-party chat group should fulfill:",
"In the next section we present the state of the art and how they fullfil some of these dimensions."
],
[
"In this section we discuss the state of the art on conversational systems in three perspectives: types of interactions, types of architecture, and types of context reasoning. Then we present a table that consolidates and compares all of them.",
"ELIZA BIBREF11 was one of the first softwares created to understand natural language processing. Joseph Weizenbaum created it at the MIT in 1966 and it is well known for acting like a psychotherapist and it had only to reflect back onto patient's statements. ELIZA was created to tackle five \"fundamental technical problems\": the identification of critical words, the discovery of a minimal context, the choice of appropriate transformations, the generation of appropriate responses to the transformation or in the absence of critical words, and the provision of an ending capacity for ELIZA scripts.",
"Right after ELIZA came PARRY, developed by Kenneth Colby, who is psychiatrist at Stanford University in the early 1970s. The program was written using the MLISP language (meta-lisp) on the WAITS operating system running on a DEC PDP-10 and the code is non-portable. Parts of it were written in PDP-10 assembly code and others in MLISP. There may be other parts that require other language translators. PARRY was the first system to pass the Turing test - the psychiatrists were able to make the correct identification only 48 percent of the time, which is the same as a random guessing.",
"A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) BIBREF12 appeared in 1995 but current version utilizes AIML, an XML language designed for creating stimulus-response chat robots BIBREF13 . A.L.I.C.E. bot has, at present, more than 40,000 categories of knowledge, whereas the original ELIZA had only about 200. The program is unable to pass the Turing test, as even the casual user will often expose its mechanistic aspects in short conversations.",
"Cleverbot (1997-2014) is a chatbot developed by the British AI scientist Rollo Carpenter. It passed the 2011 Turing Test at the Technique Techno-Management Festival held by the Indian Institute of Technology Guwahati. Volunteers participate in four-minute typed conversations with either Cleverbot or humans, with Cleverbot voted 59.3 per cent human, while the humans themselves were rated just 63.3 per cent human BIBREF14 ."
],
[
"Although most part of the research literature focuses on the dialogue of two persons, the reality of everyday life interactions shows a substantial part of multi-user conversations, such as in meetings, classes, family dinners, chats in bars and restaurants, and in almost every collaborative or competitive environment such as hospitals, schools, offices, sports teams, etc. The ability of human beings to organize, manage, and (mostly) make productive such complex interactive structures which are multi-user conversations is nothing less than remarkable. The advent of social media platforms and messaging systems such as WhatsApp in the first 15 years of the 21st century expanded our ability as a society to have asynchronous conversations in text form, from family and friends chatgroups to whole nations conversing in a highly distributed form in social media BIBREF15 .",
"In this context, many technological advances in the early 2010s in natural language processing (spearheaded by the IBM Watson's victory in Jeopardy BIBREF16 ) spurred the availability in the early 2010s of text-based chatbots in websites and apps (notably in China BIBREF17 ) and spoken speech interfaces such as Siri by Apple, Cortana by Microsoft, Alexa by Amazon, and Allo by Google. However, the absolute majority of those chatbot deployments were in contexts of dyadic dialog, that is, a conversation between a single chatbot with a single user. Most of the first toolkits for chatbot design and development of this initial period implicit assume that an utterance from the user is followed by an utterance of the chatbot, which greatly simplifies the management of the conversation as discussed in more details later. Therefore, from the interaction point of view, there are two types: 1) one in which the chatbot was designed to chat with one person or chatbot, and 2) other in which the chatbot can interact with more than two members in the chat.",
"Dyadic Chatbot",
"A Dyadic Chatbot is a chatbot that does know when to talk. If it receives an utterance, it will always handle and try to reply to the received utterance. For this chatbot to behave properly, either there are only two members in the chat, and the chatbot is one of them, or there are more, but the chatbot replies only when its name or nickname is mentioned. This means that a dyadic chatbot does not know how to coordinate with many members in a chat group. It lacks the social ability of knowing when it is more suitable to answer or not. Also, note that we are not considering here the ones that would use this social ability as an advantage in the conversation, because if the chatbot is doing with this intention, it means that the chatbot was designed to be aware of the social issues regarding a chat with multiple members, which is not the case of a dyadic chatbot. Most existing chatbots, from the first system, ELIZA BIBREF11 , until modern state-of-the-art ones fall into this category.",
"Multiparty Conversations",
"In multiparty conversations between people and computer systems, natural language becomes the communication protocol exchanged not only by the human users, but also among the bots themselves. When every actor, computer or user, understands human language and is able to engage effectively in a conversation, a new, universal computer protocol of communication is feasible, and one for which people are extremely good at.",
"There are many differences between dyadic and multiparty conversations, but chiefly among them is turn-taking, that is, how a participant determines when it is appropriate to make an utterance and how that is accomplished. There are many social settings, such as assemblies, debates, one-channel radio communications, and some formal meetings, where there are clear and explicit norms of who, when, and for long a participant can speak.",
"The state of the art for the creation of chatbots that can participate on multiparty conversations currently is a combination of the research on the creation of chatbots and research on the coordination or governance of multi-agents systems. A definition that mixes both concepts herein present is: A chatbot is an agent that interacts through natural language. Although these areas complement each other, there is a lack of solutions for creating multiparty-aware chatbots or governed chatbots, which can lead to higher degree of system trust.",
"Multi-Dyadic Chatbots",
"Turn-taking in generic, multiparty spoken conversation has been studied by, for example, Sacks et al. BIBREF18 . In broad terms, it was found that participants in general do not overlap their utterances and that the structure of the language and the norms of conversation create specific moments, called transition-relevance places, where turns can occur. In many cases, the last utterances make clear to the participants who should be the next speaker (selected-next-speaker), and he or she can take that moment to start to talk. Otherwise, any other participant can start speaking, with preference for the first starter to get the turn; or the current speaker can continue BIBREF18 .",
"A key part of the challenge is to determine whether the context of the conversation so far have or have not determined the next speaker. In its simplest form, a vocative such as the name of the next speaker is uttered. Also, there is a strong bias towards the speaker before the current being the most likely candidate to be the next speaker.",
"In general the detection of transition-relevance places and of the selected-next-speaker is still a challenge for speech-based machine conversational systems. However, in the case of text message chats, transition-relevance places are often determined by the acting of posting a message, so the main problem facing multiparty-enabled textual chatbots is in fact determining whether there is and who is the selected-next-speaker. In other words, chatbots have to know when to shut up. Bohus and Horowitz BIBREF19 have proposed a computational probabilistic model for speech-based systems, but we are not aware of any work dealing with modeling turn-taking in textual chats.",
"Coordination of Multi-Agent Systems",
"A multi-agent system (MAS) can be defined as a computational environment in which individual software agents interact with each other, in a cooperative manner, or in a competitive manner, and sometimes autonomously pursuing their individual goals. During this process, they access the environment's resources and services and occasionally produce results for the entities that initiated these software agents. As the agents interact in a concurrent, asynchronous and decentralized manner, this kind of system can be categorized as a complex system BIBREF20 .",
"Research in the coordination of multi-agent systems area does not address coordination using natural dialogue, as usually all messages are structured and formalized so the agents can reason and coordinate themselves. On the other hand, chatbots coordination have some relations with general coordination mechanisms of multi-agent systems in that they specify and control interactions between agents. However, chatbots coordination mechanisms is meant to regulate interactions and actions from a social perspective, whereas general coordination languages and mechanisms focus on means for expressing synchronization and coordination of activities and exchange of information, at a lower computational level.",
"In open multi-agent systems the development takes place without a centralized control, thus it is necessary to ensure the reliability of these systems in a way that all the interactions between agents will occur according to the specification and that these agents will obey the specified scenario. For this, these applications must be built upon a law-governed architecture.",
"Minsky published the first ideas about laws in 1987 BIBREF21 . Considering that a law is a set of norms that govern the interaction, afterwards, he published a seminal paper with the Law-Governed Interaction (LGI) conceptual model about the role of interaction laws on distributed systems BIBREF22 . Since then, he conducted further work and experimentation based on those ideas BIBREF23 . Although at the low level a multiparty conversation system is a distributed system and the LGI conceptual model can be used in a variety of application domains, it is composed of abstractions basically related to low level information about communication issues of distributed systems (like the primitives disconnected, reconnected, forward, and sending or receiving of messages), lacking the ability to express high level information of social systems.",
"Following the same approach, the Electronic Institution (EI) BIBREF24 solution also provides support for interaction norms. An EI has a set of high-level abstractions that allow for the specification of laws using concepts such as agent roles, norms and scenes.",
"Still at the agent level but more at the social level, the XMLaw description language and the M-Law framework BIBREF25 BIBREF26 were proposed and developed to support law-governed mechanism. They implement a law enforcement approach as an object-oriented framework and it allows normative behavior through the combination between norms and clocks. The M-Law framework BIBREF26 works by intercepting messages exchanged between agents, verifying the compliance of the messages with the laws and subsequently redirecting the message to the real addressee, if the laws allow it. If the message is not compliant, then the mediator blocks the message and applies the consequences specified in the law, if any. They are called laws in the sense that they enforce the norms, which represent what can be done (permissions), what cannot be done (prohibitions) and what must be done (obligations).",
"Coordinated Aware Chatbots in a Multiparty Conversation",
"With regard to chatbot engines, there is a lack of research directed to building coordination laws integrated with natural language. To the best of our knowledge, the architecture proposed in this paper is the first one in the state of the art designed to support the design and development of coordinated aware chatbots in a multiparty conversation."
],
[
"There are mainly three types of architectures when building conversational systems: totally rule-oriented, totally data-oriented, and a mix of rules and data-oriented.",
"Rule-oriented",
"A rule-oriented architecture provides a manually coded reply for each recognized utterance. Classical examples of rule-based chatbots include Eliza and Parry. Eliza could also extract some words from sentences and then create another sentence with these words based on their syntatic functions. It was a rule-based solution with no reasoning. Eliza could not \"understand\" what she was parsing. More sophisticated rule-oriented architectures contain grammars and mappings for converting sentences to appropriate sentences using some sort of knowledge. They can be implemented with propositional logic or first-order logic (FOL). Propositional logic assumes the world contains facts (which refer to events, phenomena, symptoms or activities). Usually, a set of facts (statements) is not sufficient to describe a domain in a complete manner. On the other hand, FOL assumes the world contains Objects (e.g., people, houses, numbers, etc.), Relations (e.g. red, prime, brother of, part of, comes between, etc.), and Functions (e.g. father of, best friend, etc.), not only facts as in propositional logic. Moreover, FOL contains predicates, quantifiers and variables, which range over individuals (which are domain of discourse).",
"Prolog (from French: Programmation en Logique) was one of the first logic programming languages (created in the 1970s), and it is one of the most important languages for expressing phrases, rules and facts. A Prolog program consists of logical formulas and running a program means proving a theorem. Knowledge bases, which include rules in addition to facts, are the basis for most rule-oriented chatbots created so far.",
"In general, a rule is presented as follows: DISPLAYFORM0 ",
"Prolog made it possible to perform the language of Horn clauses (implications with only one conclusion). The concept of Prolog is based on predicate logic, and proving theorems involves a resolute system of denials. Prolog can be distinguished from classic programming languages due to its possibility of interpreting the code in both a procedural and declarative way. Although Prolog is a set of specifications in FOL, it adopts the closed-world assumption, i.e. all knowledge of the world is present in the database. If a term is not in the database, Prolog assumes it is false.",
"In case of Prolog, the FOL-based set of specifications (formulas) together with the facts compose the knowledge base to be used by a rule-oriented chatbot. However an Ontology could be used. For instance, OntBot BIBREF27 uses mapping technique to transform ontologies and knowledge into relational database and then use that knowledge to drive its chats. One of the main issues currently facing such a huge amount of ontologies stored in a database is the lack of easy to use interfaces for data retrieval, due to the need to use special query languages or applications.",
"In rule-oriented chatbots, the degree of intelligent behavior depends on the knowledge base size and quality (which represents the information that the chatbot knows), poor ones lead to weak chatbot responses while good ones do the opposite. However, good knowledge bases may require years to be created, depending on the domain.",
"Data-oriented",
"As opposed to rule-oriented architectures, where rules have to be explicitly defined, data-oriented architectures are based on learning models from samples of dialogues, in order to reproduce the behavior of the interaction that are observed in the data. Such kind of learning can be done by means of machine learning approach, or by simply extracting rules from data instead of manually coding them.",
"Among the different technologies on which these system can be based, we can highlight classical information retrieval algorithms, neural networks BIBREF28 , Hidden Markov Models (HMM) BIBREF29 , and Partially Observable Markov Decision Process (POMDP) BIBREF30 . Examples include Cleverbot and Tay BIBREF31 . Tay was a chatbot developed by Microsoft that after one day live learning from interaction with teenagers on Twitter, started replying impolite utterances. Microsoft has developed others similar chatbots in China (Xiaoice) and in Japan (Rinna). Microsoft has not associated its publications with these chatbots, but they have published a data-oriented approach BIBREF32 that proposes a unified multi-turn multi-task spoken language understanding (SLU) solution capable of handling multiple context sensitive classification (intent determination) and sequence labeling (slot filling) tasks simultaneously. The proposed architecture is based on recurrent convolutional neural networks (RCNN) with shared feature layers and globally normalized sequence modeling components.",
"A survey of public available corpora for can be found in BIBREF33 . A corpus can be classified into different categories, according to: the type of data, whether it is spoken dialogues, transcripts of spoken dialogues, or directly written; the type of interaction, if it is human-human or human-machine; and the domain, whether it is restricted or unconstrained. Two well-known corpora are the Switchboard dataset, which consists of transcripts of spoken, unconstrained, dialogues, and the set of tasks for the Dialog State Tracking Challenge (DSTC), which contain more constrained tasks, for instance the restaurant and travel information sets.",
"Rule and Data-oriented",
"The model of learning in current A.L.I.C.E. BIBREF13 is incremental or/and interactive learning because a person monitors the robot's conversations and creates new AIML content to make the responses more appropriate, accurate, believable, \"human\", or whatever he/she intends. There are algorithms for automatic detection of patterns in the dialogue data and this process provides the person with new input patterns that do not have specific replies yet, permitting a process of almost continuous supervised refinement of the bot.",
"As already mentioned, A.L.I.C.E. consists of roughly 41,000 elements called categories which is the basic unit of knowledge in AIML. Each category consists of an input question, an output answer, and an optional context. The question, or stimulus, is called the pattern. The answer, or response, is called the template. The two types of optional context are called that and topic. The keyword that refers to the robot's previous utterance. The AIML pattern language consists only of words, spaces, and the wildcard symbols \"_\" and \"*\". The words may consist only of letters and numerals. The pattern language is case invariant. Words are separated by a single space, and the wildcard characters function like words, similar to the initial pattern matching strategy of the Eliza system. More generally, AIML tags transform the reply into a mini computer program which can save data, activate other programs, give conditional responses, and recursively call the pattern matcher to insert the responses from other categories. Most AIML tags in fact belong to this template side sublanguage BIBREF13 .",
"AIML language allows:",
"Symbolic reduction: Reduce complex grammatical forms to simpler ones.",
"Divide and conquer: Split an input into two or more subparts, and combine the responses to each.",
"Synonyms: Map different ways of saying the same thing to the same reply.",
"Spelling or grammar corrections: the bot both corrects the client input and acts as a language tutor.",
"Detecting keywords anywhere in the input that act like triggers for a reply.",
"Conditionals: Certain forms of branching to produce a reply.",
"Any combination of (1)-(6).",
"When the bot chats with multiple clients, the predicates are stored relative to each client ID. For example, the markup INLINEFORM0 set name INLINEFORM1 \"name\" INLINEFORM2 Matthew INLINEFORM3 /set INLINEFORM4 stores the string Matthew under the predicate named \"name\". Subsequent activations of INLINEFORM5 get name=\"name\" INLINEFORM6 return \"Matthew\". In addition, one of the simple tricks that makes ELIZA and A.L.I.C.E. so believable is a pronoun swapping substitution. For instance:",
"U: My husband would like to invest with me.",
"S: Who else in your family would like to invest with you?"
],
[
"According to the types of intentions, conversational systems can be classified into two categories: a) goal-driven or task oriented, and b) non-goal-driven or end-to-end systems.",
"In a goal-driven system, the main objective is to interact with the user so that back-end tasks, which are application specific, are executed by a supporting system. As an example of application we can cite technical support systems, for instance air ticket booking systems, where the conversation system must interact with the user until all the required information is known, such as origin, destination, departure date and return date, and the supporting system must book the ticket. The most widely used approaches for developing these systems are Partially-observed Decision Processes (POMDP) BIBREF30 , Hidden Markov Models (HMM) BIBREF29 , and more recently, Memory Networks BIBREF28 . Given that these approaches are data-oriented, a major issue is to collect a large corpora of annotated task-specific dialogs. For this reason, it is not trivial to transfer the knowledge from one to domain to another. In addition, it might be difficult to scale up to larger sets of tasks.",
"Non-goal-driven systems (also sometimes called reactive systems), on the other hand, generate utterances in accordance to user input, e.g. language learning tools or computer games characters. These systems have become more popular in recent years, mainly owning to the increase of popularity of Neural Networks, which is also a data-oriented approach. The most recent state of the art to develop such systems have employed Recurrent Neural Networs (RNN) BIBREF34 , Dynamic Context-Sensitive Generation BIBREF35 , and Memory Networks BIBREF36 , just to name a few. Nevertheless, probabilistic methods such as Hidden Topic Markov Models (HTMM) BIBREF37 have also been evaluated. Goal-driven approach can create both pro-active and reactive chatbots, while non-goal-driven approach creates reactive chatbots. In addition, they can serve as a tool to goal-driven systems as in BIBREF28 . That is, when trained on corpora of a goal-driven system, non-goal-driven systems can be used to simulate user interaction to then train goal-driven models."
],
[
"A dialogue system may support the context reasoning or not. Context reasoning is necessary in many occasions. For instance, when partial information is provided the chatbot needs to be able to interact one or more turns in order to get the complete information in order to be able to properly answer. In BIBREF38 , the authors present a taxonomy of errors in conversational systems. The ones regarding context-level errors are the ones that are perceived as the top-10 confusing and they are mainly divided into the following:",
"Excess/lack of proposition: the utterance does not provide any new proposition to the discourse context or provides excessive information than required.",
"Contradiction: the utterance contains propositions that contradict what has been said by the system or by the user.",
"Non-relevant topic: the topic of the utterance is irrelevant to the current context such as when the system suddenly jumps to some other topic triggered by some particular word in the previous user utterance.",
"Unclear relation: although the utterance might relate to the previous user utterance, its relation to the current topic is unclear.",
"Topic switch error: the utterance displays the fact that the system missed the switch in topic by the user, continuing with the previous topic.",
"Rule-oriented",
"In the state of the art most of the proposed approaches for context reasoning lies on rules using logics and knowledge bases as described in the Rule-oriented architecture sub-section. Given a set of facts extracted from the dialogue history and encoded in, for instance, FOL statements, a queries can be posed to the inference engine and produce answers. For instance, see the example in Table TABREF37 . The sentences were extracted from BIBREF36 (which does not use a rule-oriented approach), and the first five statements are their respective facts. The system then apply context reasoning for the query Q: Where is the apple.",
"If statements above are received on the order present in Table TABREF37 , if the query Q: Where is the apple is sent, the inference engine will produce the answer A: Bedroom (i.e., the statement INLINEFORM0 is found by the model and returned as True).",
"Nowadays, the most common way to store knowledge bases is on triple stores, or RDF (Resource Description Framework) stores. A triple store is a knowledge base for the storage and retrieval of triples through semantic queries. A triple is a data entity composed of subject-predicate-object, like \"Sam is at the kitchen\" or \"The apple is with Sam\", for instance. A query language is needed for storing and retrieving data from a triple store. While SPARQL is a RDF query language, Rya is an open source scalable RDF triple store built on top of Apache Accumulo. Originally developed by the Laboratory for Telecommunication Sciences and US Naval Academy, Rya is currently being used by a number of american government agencies for storing, inferencing, and querying large amounts of RDF data.",
"A SPARQL query has a SQL-like syntax for finding triples matching specific patterns. For instance, see the query below. It retrieves all the people that works at IBM and lives in New York:",
"SELECT ?people ",
"WHERE {",
"?people <worksAt> <IBM> . ",
"?people <livesIn> <New York>. ",
"}",
"Since triple stores can become huge, Rya provides three triple table index BIBREF39 to help speeding up queries:",
"SPO: subject, predicate, object",
"POS: predicate, object, subject",
"OSP: object, subject, predicate",
"While Rya is an example of an optimized triple store, a rule-oriented chatbot can make use of Rya or any triple store and can call the semantic search engine in order to inference and generate proper answers.",
"Data-oriented",
"Recent papers have used neural networks to predict the next utterance on non-goal-driven systems considering the context, for instance with Memory Networks BIBREF40 . In this work BIBREF36 , for example the authors were able to generate answers for dialogue like below:",
"Sam walks into the kitchen.",
"Sam picks up an apple.",
"Sam walks into the bedroom.",
"Sam drops the apple.",
"Q: Where is the apple?",
"A: Bedroom",
"Sukhbaatar's model represents the sentence as a vector in a way that the order of the words matter, and the model encodes the temporal context enhancing the memory vector with a matrix that contains the temporal information. During the execution phase, Sukhbaatar's model takes a discrete set of inputs INLINEFORM0 that are to be stored in the memory, a query INLINEFORM1 , and outputs an answer INLINEFORM2 . Each of the INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 contains symbols coming from a dictionary with INLINEFORM6 words. The model writes all INLINEFORM7 to the memory up to a fixed buffer size, and then finds a continuous representation for the INLINEFORM8 and INLINEFORM9 . The continuous representation is then processed via multiple computational steps to output INLINEFORM10 . This allows back propagation of the error signal through multiple memory accesses back to the input during training. Sukhbaatar's also presents the state of the art of recent efforts that have explored ways to capture dialogue context, treated as long-term structure within sequences, using RNNs or LSTM-based models. The problem of this approach is that it is has not been tested for goal-oriented systems. In addition, it works with a set of sentences but not necessary from multi-party bots."
],
[
"Regarding current platforms to support the development of conversational systems, we can categorize them into three types: platforms for plugging chatbots, for creating chatbots and for creating service chatbots. The platforms for plugging chatbots provide tools for integrating them another system, like Slack. The chatbots need to receive and send messages in a specific way, which depends on the API and there is no support for actually helping on building chatbots behavior with natural language understanding. The platforms for creating chatbots mainly provide tools for adding and training intentions together with dialogue flow specification and some entities extraction, with no reasoning support. Once the models are trained and the dialogue flow specified, the chatbots are able to reply to the received intention. The platforms for creating service chatbots provide the same functionalities as the last one and also provide support for defining actions to be executed by the chatbots when they are answering to an utterance. Table TABREF43 summarizes current platforms on the market accordingly to these categories. There is a lack on platforms that allow to create chatbots that can be coordinated in a multiparty chat with governance or mediation."
],
[
"In this section the conceptual architecture for creating a hybrid rule and machine learning-based MPCS is presented. The MPCS is defined by the the entities and relationships illustrated in Fig. FIGREF44 which represents the chatbot's knowledge. A Chat Group contains several Members that join the group with a Role. The role may constrain the behavior of the member in the group. Chatbot is a type of Role, to differentiate from persons that may also join with different roles. For instance, a person may assume the role of the owner of the group, or someone that was invited by the owner, or a domain role like an expert, teacher or other.",
"When a Member joins the Chat Group, it/he/she can send Utterances. The Member then classifies each Utterance with an Intent which has a Speech Act. The Intent class, Speech Act class and the Intent Flow trigger the Action class to be executed by the Member that is a Chatbot. The Chatbots associated to the Intention are the only ones that know how to answer to it by executing Actions. The Action, which implements one Speech Act, produces answers which are Utterances, so, for instance, the Get_News action produces an Utterance for which Intention's speech act is Inform_News. The Intent Flow holds the intent's class conversation graph which maps the dialog state as a decision tree. The answer's intention class is mapped in the Intent Flow as a directed graph G defined as following: DISPLAYFORM0 ",
"From the graph definitions, INLINEFORM0 is for vertices and INLINEFORM1 is for relations, which are the arrows in the graph. And in Equation EQREF46 :",
" INLINEFORM0 is the set of intentions pairs,",
" INLINEFORM0 is the set of paths to navigate through the intentions,",
" INLINEFORM0 is the arrow's head, and",
" INLINEFORM0 is the arrow's tail.",
"This arrow represents a turn from an utterance with INLINEFORM0 intention class which is replying to an utterance with INLINEFORM1 intention class to the state which an utterance with INLINEFORM2 intention's class is sent.",
" INLINEFORM0 is the intention class of the answer to be provided to the received INLINEFORM1 intention class.",
"In addition, each intent's class may refer to many Entities which, in turn, may be associated to several Features. For instance, the utterance",
"\"I would like to invest USD10,000 in Savings Account for 2 years\"",
"contains one entity – the Savings Account's investment option – and two features – money (USD10,000) and period of time (2 years). The Intent Flow may need this information to choose the next node which will give the next answer. Therefore, if the example is changed a little, like",
"\"I would like to invest in Savings Account\",",
" INLINEFORM0 is constrained by the \"Savings Account\" entity which requires the two aforementioned features. Hence, a possible answer by one Member of the group would be",
"\"Sure, I can simulate for you, what would be the initial amount and the period of time of the investment?\"",
"With these conceptual model's elements, a MPCS system can be built with multiple chatbots. Next subsection further describes the components workflow."
],
[
"Figure FIGREF48 illustrates from the moment that an utterance is sent in a chat group to the moment a reply is generated in the same chat group, if the case. One or more person may be in the chat, while one or more chatbots too. There is a Hub that is responsible for broadcasting the messages to every Member in the group, if the case. The flow starts when a Member sends the utterance which goes to the Hub and, if allowed, is broadcasted. Many or none interactions norms can be enforced at this level depending on the application. Herein, a norm can be a prohibition, obligation or permission to send an utterance in the chat group.",
"Once the utterance is broadcasted, a chatbot needs to handle the utterance. In order to properly handle it, the chatbot parses the utterance with several parsers in the Parsing phase: a Topic Classifier, the Dependency Parsing, which includes Part-of-Speech tags and semantics tags, and any other that can extract metadata from the utterance useful for the reasoning. All these metadata, together with more criteria, may be used in the Frame parsing which is useful for context reasoning. All knowledge generated in this phase can be stored in the Context. Then, the Intent Classifier tries to detect the intent class of the utterance. If detected, the Speech Act is also retrieved. And an Event Detector can also check if there is any dialog inconsistency during this phase.",
"After that, the Filtering phase receives the object containing the utterance, the detected intent, and all metadata extracted so far and decides if an action should be performed to reply to the utterance. If yes, it is sent to the Acting phase which performs several steps. First the Action Classifier tries to detect the action to be performed. If detected, the action is executed. At this step, many substeps may be performed, like searching for an information, computing maths, or generating information to create the answer. All of this may require a search in the Context and also may activate the Error Detector component to check if the dialog did not run into a wrong state. After the answer is generated, the Filtering phase is activated again to check if the reply should be really sent. If so, it is sent to the Hub which, again may check if it can be broadcasted before actually doing it.",
"The topic classifier is domain-dependent and is not mandatory. However, the chatbot can better react when the intent or action is not detected, which means that it does not know how to answer. Many reasons might explain this situation: the set of intents might be incomplete, the action might not have produced the proper behavior, misunderstanding might happen, or the chatbot was not designed to reply to a particular topic. In all cases, it must be able to produce a proper reply, if needed. Because this might happen throughout the workflow, the sooner that information is available, the better the chatbot reacts. Therefore it is one of the first executions of the flow.",
"Dependency is the notion that linguistic units, e.g. words, are connected to each other by directed links. The (finite) verb is taken to be the structural center of clause structure. All other syntactic units (words) are either directly or indirectly connected to the verb in terms of the directed links, which are called dependencies. It is a one-to-one correspondence: for every element (e.g. word or morph) in the sentence, there is exactly one node in the structure of that sentence that corresponds to that element. The result of this one-to-one correspondence is that dependency grammars are word (or morph) grammars. All that exist are the elements and the dependencies that connect the elements into a structure. Dependency grammar (DG) is a class of modern syntactic theories that are all based on the dependency relation.",
"Semantic dependencies are understood in terms of predicates and their arguments. Morphological dependencies obtain between words or parts of words. To facilitate future research in unsupervised induction of syntactic structure and to standardize best-practices, a tagset that consists of twelve universal part-of-speech categories was proposed BIBREF41 .",
"Dependency parsers have to cope with a high degree of ambiguity and nondeterminism which let to different techniques than the ones used for parsing well-defined formal languages. Currently the mainstream approach uses algorithms that derive a potentially very large set of analyses in parallel and when disambiguation is required, this approach can be coupled with a statistical model for parse selection that ranks competing analyses with respect to plausibility BIBREF42 .",
"Below we present an example of a dependency tree for the utterance:",
"\"I want to invest 10 thousands\":",
"[s]\"\"blue[l]:black",
" \"tree\": {",
" \"want VERB ROOT\": {",
" \"I PRON nsubj\": {},",
" \"to ADP mark\": {},",
" \"invest VERB nmod\": {",
" \"thousands NOUN nmod\": {",
" \"10 NUM nummod\": {}",
" }",
" }",
" }",
" ",
"The coarse-grained part-of-speech tags, or morphological dependencies (VERB, PRON, ADP, NOUN and NUM) encode basic grammatical categories and the grammatical relationships (nsubjs, nmod, nummod) are defined in the Universal Dependencies project BIBREF41 .",
"In this module, the dependency tree generated is used together with a set of rules to extract information that is saved in the context using the Frame-based approach. This approach fills the slots of the frame with the extracted values from the dialogue. Frames are like forms and slots are like fields. Using the knowledge's conceptual model, the fields are represented by the elements Entities and Features. In the dependency tree example, the entity would be the implicit concept: the investment option, and the feature is the implicit concept: initial amount – 10 thousands. Since the goal is to invest, and there are more entities needed for that (i.e., fields to be filled), the next node in the Intention Flow tree would return an utterance which asks the user the time of investment, if he/she has not provided yet.",
"This module could be implemented using different approaches according to the domain, but tree search algorithms will be necessary for doing the tree parsing.",
"The Intent Classifier component aims at recognizing not only the Intent but the goal of the utterance sent by a Member, so it can properly react. The development of an intent classifier needs to deal with the following steps:",
"i) the creation of dataset of intents, to train the classification algorithm;",
"ii) the design of a classification algorithm that provides a reasonable level of accuracy;",
"iii) the creation of dataset of trees of intents, the same as defined in i) and which maps the goals;",
"iv) the design of a plan-graph search algorithm that maps the goal's state to a node in the graph;",
"There are several approaches to create training sets for dialogues: from an incremental approach to crowdsourcing. In the incremental approach, the Wizard of Oz method can be applied to a set of potential users of the system, and from this study, a set of questions that the users asked posted to the `fake' system can be collected. These questions have to be manually classified into a set of intent classes, and used to train the first version of the system. Next, this set has to be increased both in terms of number of classes and samples per class.",
"The Speech Act Classifier can be implemented with many speech act classes as needed by the application. The more classes, the more flexible the chatbot is. It can be built based on dictionaries, or a machine learning-based classifier can be trained. In the table below we present the main and more general speech act classes BIBREF43 used in the Chatbots with examples to differentiate one from another:",
"There are at least as many Action classes as Speech Act classes, since the action is the realization of a speech act. The domain specific classes, like \"Inform_News\" or \"Inform_Factoids\", enhance the capabilities of answering of a chatbot.",
"The Action Classifier can be defined as a multi-class classifier with the tuple DISPLAYFORM0 ",
"where INLINEFORM0 is the intent of the answer defined in ( EQREF46 ), INLINEFORM1 is the speech act of the answer, INLINEFORM2 and INLINEFORM3 are the sets of entities and features needed to produce the answer, if needed, respectively.",
"This component is responsible for implementing the behavior of the Action class. Basic behaviors may exist and be shared among different chatbots, like the ones that implement the greetings, thanks or not understood. Although they can be generic, they can also be personalized to differentiate the bot from one another and also to make it more \"real\". Other cases like to inform, to send a query, to send a proposal, they are all domain-dependent and may require specific implementations.",
"Anyway, figure FIGREF59 shows at the high level the generic workflow. If action class detected is task-oriented, the system will implement the execution of the task, say to guide a car, to move a robot's arm, or to compute the return of investments. The execution might need to access an external service in the Internet in order to complete the task, like getting the inflation rate, or the interest rate, or to get information about the environment, or any external factor. During the execution or after it is finished, the utterance is generated as a reply and, if no more tasks are needed, the action execution is finished.",
"In the case of coordination of chatbots, one or more chatbots with the role of mediator may exist in the chat group and, at this step, it is able to invite one or more chatbots to the chat group and it is also able to redirect the utterances, if the case.",
"The proposed architecture addresses the challenges as the following:",
"What is the message/utterance about? solved by the Parsing phase;",
"Who should reply to the utterance? solved by the Filtering phase and may be enforced by the Hub;",
"How the reply should be built/generated? solved by the Acting phase;",
"When should the reply be sent? may be solved by the Acting phase or the Filtering phase, and may be enforced by the Hub;",
"And Context and Logging module is used throughout all phases."
],
[
"This section presents one implementation of the conceptual architecture presented in last section. After many refactorings, a framework called SABIA (Speech-Act-Based Intelligent Agents Framework) has been developed and CognIA (Cognitive Investment Advisor) application has been developed as an instantiation of SABIA framework. We present then the accuracy and some automated tests of this implementation."
],
[
"SABIA was developed on top of Akka middleware. Akka is a toolkit and runtime that implements the Actor Model on the JVM. Akka's features, like concurrency, distributed computing, resilience, and message-passing were inspired by Erlang's actor model BIBREF44 BIBREF45 . The actor model is a mathematical model of concurrent computation that treats \"actors\" as the universal primitives of concurrent computation. In response to a message that it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next received message. Actors may modify private state, but can only affect each other through messages (avoiding the need for any locks). Akka middleware manages the actors life cycle and actors look up by theirs name, locally or remotely.",
"We implemented each Member of the Chat Group as an Actor by extending the UntypedActor class of Akka middleware. Yet, we created and implemented the SabiaActorSystem as a singleton (i.e., a single instance of it exists in the system) BIBREF46 that has a reference to Akka's ActorSystem. During SabiaActorSystem's initialization, all parsers that consume too much memory during their initialization to load models are instantiated as singletons. In this way, we save time on their calls during the runtime. Moreover, all chat group management, like to join or leave the group, or to broadcast or filter a message at the Hub level is implemented in SABIA through the Chat Group behavior.",
"This is implemented in SABIA as a singleton that is initialized during the SabiaActorSystem initialization with the URL of the service that implements the dependency parsing and is used on each utterance's arrival through the execution of the tagUtterance method. The service must retrieve a JSON Object with the dependency tree which is then parsed using depth-first search.",
"SABIA does not support invariants for frame parsing. We are leaving this task to the instantiated application.",
"There are two intent classifiers that can be loaded with trained models in order to be ready to be used at runtime: the 1-nearest-neighbor (1NN) and the SVM-based classifier.",
"SABIA implements the Action Classifier assuming that the application uses a relational database with a data schema that implements the conceptual model presented in Figure FIGREF44 . Then the invariants parts that use SQL are already present and the application only needs to implement the database connection and follow the required data schema.",
"SABIA provides partial implemented behavior for the Action through the Template method design pattern BIBREF46 , which implements the invariants parts of the action execution and leaves placeholders for customization."
],
[
"We developed CognIA, which is an instantiation of Sabia framework. A conversation is composed of a group chat that can contain multiple users and multiple chatbots. This example, in particular, has a mediator that can help users on financial matters, more specifically on investment options. For example, consider the following dialogue in the table below:",
"The Table TABREF71 shows an example that uses the mixed-initiative dialogue strategy, and a dialogue mediator to provide coordination control. In this example of an application, there are many types of intentions that should be answered: Q&A (question and answer) about definitions, investment options, and about the current finance indexes, simulation of investments, which is task-oriented and requires computation, and opinions, which can be highly subjective.",
"In Table SECREF72 , we present the interaction norms that were needed in Cognia. The Trigger column describes the event that triggers the Behavior specified in the third column. The Pre-Conditions column specifies what must happen in order to start the behavior execution. So, for instance, line 2, when the user sends an utterance in the chat group, an event is triggered and, if the utterance's topic is CDB (Certificate of Deposit which is a fixed rate investment) or if it is about the Savings Account investment option and the speech act is not Query_Calculation and the CDB and Savings Account members are not in the chat, then the behavior is activated. The bot members that implement these behaviors are called cdbguru and poupancaguru. Therefore these names are used when there is a mention.",
"Note that these interactions norms are not explicitly defined as obligations, permissions, and prohibitions. They are implict from the behavior described. During this implementation, we did not worry about explicitly defining the norms, because the goal was to evaluate the overall architecture, not to enhance the state of the art on norms specification for conversational systems. In addition, CognIA has only the presented interaction norms defined in Table SECREF72 , which is a very small set that that does not required model checking or verification of conflicts.",
"|p2cm|p5.0cm|p5.0cm|Cognia Interaction NormsCognia Interaction Norms",
"Trigger Pre-Conditions Behavior",
"On group chat creation Cognia chatbot is available Cognia chatbot joins the chat with the mediator role and user joins the chat with the owner_user role",
"On utterance sent by user Utterance's topic is CDB (cdbguru) or Savings Account (poupancaguru) and speech act is not Query_Calculation and they are not in the chat Cognia invites experts to the chat and repeats the utterance to them",
"On utterance sent by user Utterance's topic is CDB (cdbguru) or Savings Account (poupancaguru) and speech act is not Query_Calculation and they are in the chat Cognia waits for while and cdbguru or poupancaguru respectively handles the utterance. If they don't understand, they don't reply",
"On utterance sent by the experts If Cognia is waiting for them and has received both replies Cognia does not wait anymore",
"On utterance sent Utterance mentions cdbguru or poupancaguru cdbguru or poupancaguru respectively handles the utterance",
"On utterance sent Utterance mentions cdbguru or poupancaguru and they don't reply after a while and speech act is Query_Calculation Cognia sends I can only chat about investments...",
"On utterance sent Utterance mentions cdbguru or poupancaguru and they don't reply after while and speech act is not Query_Calculation Cognia sends I didn't understand",
"On utterance sent Utterance's speech act is Query_Calculation and period or initial amount of investment were not specified Cognia asks the user the missing information",
"On utterance sent Utterance's speech act is Query_Calculation and period and initial amount of investment were specified and the experts are not in the chat Cognia invites experts to the chat and repeats the utterance to them",
"On utterance sent Utterance's speech act is Query_Calculation and period and initial amount of investment were specified and the experts are in the chat Cognia repeats the utterance to experts",
"On utterance sent Utterance's speech act is Query_Calculation Cognia extracts variables and saves the context",
"On utterance sent Utterance's speech act is Query_Calculation and the experts are in the chat and the experts are mentioned Experts extract information, save in the context, compute calculation and send information",
"On utterance sent Utterance's speech act is Inform_Calculation and Cognia received all replies Cognia compares the results and inform comparison",
"On utterance sent Utterance mentions a chatbot but has no other text The chatbot replies How can I help you?",
"On utterance sent Utterance is not understood and speech act is Question The chatbot replies I don't know... I can only talk about topic X",
"On utterance sent Utterance is not understood and speech act is not Question The chatbot replies I didn't understand",
"On utterance sent Utterance's speech act is one of { Greetings, Thank, Bye } All chatbots reply to utterance",
"On group chat end All chatbots leave the chat, and the date and time of the end of chat is registered",
"",
"We instantiated SABIA to develop CognIA as follows: the Mediator, Savings Account, CDB and User Actors are the Members of the Chat Group. The Hub was implemented using two servers: Socket.io and Node.JS which is a socket client of the Socket.io server. The CognIA system has also one Socket Client for receiving the broadcast and forwarding to the Group Chat Manager. The former will actually do the broadcast to every member after enforcing the norms that applies specified in Table SECREF72 . Each Member will behave according to this table too. For each user of the chat group, on a mobile or a desktop, there is its corresponding actor represented by the User Actor in the figure. Its main job is to receive Akka's broadcast and forward to the Socket.io server, so it can be finally propagated to the users.",
"All the intents, actions, factual answers, context and logging data are saved in DashDB (a relational Database-as-a-Service system). When an answer is not retrieved, a service which executes the module Search Finance on Social Media on a separate server is called. This service was implemented with the assumption that finance experts post relevant questions and answers on social media. Further details are explained in the Action execution sub-section.",
"We built a small dictionary-based topic classifier to identify if an utterance refers to finance or not, and if it refers to the two investment options (CDB or Savings Account) or not.",
"The dependency parsing is extremely important for computing the return of investment when the user sends an utterance with this intention. Our first implementation used regular expressions which led to a very fragile approach. Then we used a TensorFlow implementation BIBREF47 of a SyntaxNet model for Portuguese and used it to generate the dependency parse trees of the utterances. The SyntaxNet model is a feed-forward neural network that operates on a task-specific transition system and achieves the state-of-the-art on part-of-speech tagging, dependency parsing and sentence compression results BIBREF48 . Below we present output of the service for the utterance:",
"\"I want to invest 10 thousands in 40 months\":",
"[s]\"\"blue[l]:black",
"{ \"original\": \"I would like to invest 10 thousands in 40 months\",",
" \"start_pos\": [",
" 23,",
" 32],",
" \"end_pos\": [",
" 27,",
" 33],",
" \"digits\": [",
" 10000,",
" 40],",
" \"converted\": \"I would like to invest 10000 in 40 months\",",
" \"tree\": {",
" \"like VERB ROOT\": {",
" \"I PRON nsubj\": {},",
" \"would MD aux\":{",
" \"invest VERB xcomp\":{",
" \"to TO aux\": {},",
" \"10000 NUM dobj\": {},",
" \"in IN prep\": {",
" \"months NOUN pobj\":{",
" \"40 NUM num\": {}}}}}}}",
"The service returns a JSON Object containing six fields: original, start_pos, end_pos, digits, converted and tree. The original field contains the original utterance sent to the service. The converted field contains the utterance replaced with decimal numbers, if the case (for instance, \"10 thousands\" was converted to \"10000\" and replaced in the utterance). The start_pos and end_pos are arrays that contain the start and end char positions of the numbers in the converted utterance. While the tree contains the dependency parse tree for the converted utterance.",
"Given the dependency tree, we implemented the frame parsing which first extracts the entities and features from the utterance and saves them in the context. Then, it replaces the extracted entities and features for reserved characters.",
"extract_period_of_investment (utteranceTree) [1] [t] numbersNodes INLINEFORM0 utteranceTree.getNumbersNodes(); [t] foreach(numberNode in numbersNodes) do [t] parentsOfNumbersNode INLINEFORM1 numbersNode.getParents() [t] foreach(parent in parentsOfNumbersNodes) do [t] if ( parent.name contains { \"day\", \"month\", \"year\"} ) then [t] parentOfParent INLINEFORM2 parent.getParent() [t] if ( parentOfParent is not null and",
"parentOfParent.getPosTag==Verb and",
"parentOfParent.name in investmentVerbsSet ) then [t] return numberNode",
"Therefore an utterance like \"I would like to invest 10 thousands in 3 years\" becomes \"I would like to invest #v in #dt years\". Or \"10 in 3 years\" becomes \"#v in #dt years\", and both intents have the same intent class.",
"For doing that we implemented a few rules using a depth-first search algorithm combined with the rules as described in Algorithm UID79 , Algorithm UID79 and Algorithm UID79 . Note that our parser works only for short texts on which the user's utterance mentions only one period of time and/ or initial amount of investment in the same utterance.",
"extract_initial_amount_of_investment (utteranceTree) [1] [t] numbersNodes INLINEFORM0 utteranceTree.getNumbersNodes(); [t] foreach(numberNode in numbersNodes) do [t] parentsOfNumbersNode INLINEFORM1 numbersNode.getParents() [t] foreach(parent in parentsOfNumbersNodes) do [t] if ( parent.name does not contain { \"day\", \"month\", \"year\"} ) then [t] return numberNode",
"frame_parsing(utterance, utteranceTree) [1] [t] period INLINEFORM0 extract_period_of_investment (utteranceTree) [t] save_period_of_investment(period) [t] value INLINEFORM1 extract_initial_amount_of_investment (utteranceTree) [t] save_initial_amount_of_investment(value) [t] new_intent INLINEFORM2 replace(new_intent, period, \"#dt\") [t] new_intent INLINEFORM3 replace(new_intent, value, \"#v\")",
"In CognIA we have complemented the speech act classes with the ones related to the execution of specific actions. Therefore, if the chatbot needed to compute the return of investment, then, once it is computed, the speech act of the reply will be Inform_Calculation and the one that represents the query for that is Query_Calculation. In table TABREF81 we list the specific ones.",
"Given that there is no public dataset available with financial intents in Portuguese, we have employed the incremental approach to create our own training set for the Intent Classifier. First, we applied the Wizard of Oz method and from this study, we have collected a set of 124 questions that the users asked. Next, after these questions have been manually classified into a set of intent classes, and used to train the first version of the system, this set has been increased both in terms of number of classes and samples per class, resulting in a training set with 37 classes of intents, and a total 415 samples, with samples per class ranging from 3 to 37.",
"We have defined our classification method based on features extracted from word vectors. Word vectors consist of a way to encode the semantic meaning of the words, based on their frequency of co-occurrence. To create domain-specific word vectors, a set of thousand documents are needed related to desired domain. Then each intent from the training set needs to be encoded with its corresponding mean word vector. The mean word vector is then used as feature vector for standard classifiers.",
"We have created domain-specific word vectors by considering a set 246,945 documents, corresponding to of 184,001 Twitter posts and and 62,949 news articles, all related to finance .",
"The set of tweets has been crawled from the feeds of blog users who are considered experts in the finance domain. The news article have been extracted from links included in these tweets. This set contained a total of 63,270,124 word occurrences, with a vocabulary of 97,616 distinct words. With the aforementioned word vectors, each intent from the training set has been encoded with its corresponding mean word vector. The mean word vector has been then used as feature vector for standard classifiers.",
"As the base classifier, we have pursued with a two-step approach. In the first step, the main goal was to make use of a classifier that could be easily retrained to include new classes and intents. For this reason, the first implementation of the system considered an 1-nearest-neighbor (1NN) classifier, which is simply a K-nearest-neighbor classifier with K set to 1. With 1NN, the developer of the system could simply add new intents and classes to the classifier, by means of inserting new lines into the database storing the training set. Once we have considered that the training set was stable enough for the system, we moved the focus to an approach that would be able to provide higher accuracy rates than 1NN. For this, we have employed Support Vector Machines (SVM) with a Gaussian kernel, the parameters of which are optimized by means of a grid search.",
"We manually mapped the intent classes used to train the intent classifier to action classes and the dependent entities and features, when the case. Table TABREF85 summarizes the number of intent classes per action class that we used in CognIA.",
"For the majority of action classes we used SABIA's default behavior. For instance, Greet and Bye actions classes are implemented using rapport, which means that if the user says \"Hi\" the chatbot will reply \"Hi\".",
"The Search News, Compute and Ask More classes are the ones that require specific implemention for CognIA as following:",
"Search News: search finance on social media service BIBREF49 , BIBREF50 receives the utterance as input, searches on previously indexed Twitter data for finance for Portuguese and return to the one which has the highest score, if found.",
"Ask More: If the user sends an utterance that has the intention class of simulating the return of investment, while not all variables to compute the return of investment are extracted from the dialogue, the mediator keeps asking the user these information before it actually redirects the query to the experts. This action then checks the state of the context given the specified intent flow as described in ( EQREF46 ) and ( EQREF57 ) in section SECREF4 to decide which variables are missing. For CognIA we manually added these dependencies on the database.",
"Compute: Each expert Chatbot implements this action according to its expertise. The savings account chatbot computes the formula ( EQREF90 ) and the certificate of deposit computes the formula ( EQREF92 ). Both are currently formulas for estimating in Brazil. DISPLAYFORM0 ",
"where INLINEFORM0 is the return of investment for the savings account, INLINEFORM1 is the initial value of investment, INLINEFORM2 is the savings account interest rate and INLINEFORM3 is the savings account rate base. DISPLAYFORM0 ",
"where INLINEFORM0 is the return of investment for certificate of deposit, INLINEFORM1 is the initial value of investment, INLINEFORM2 is the Interbank Deposit rate (DI in Portuguese), INLINEFORM3 is the ID's percentual payed by the bank (varies from 90% to 120%), INLINEFORM4 is the number of days the money is invested, and finally INLINEFORM5 is the income tax on the earnings."
],
[
"In Table TABREF95 we present the comparison of some distinct classification on the first version of the training set, i.e. the set used to deploy the first classifier into the system. Roughly speaking, the 1NN classifier has been able to achieve a level of accuracy that is higher than other well-known classifiers, such as Logistic Regression and Naïve Bayes, showing that 1NN is suitable as a development classifier. Nevertheless, a SVM can perform considerable better than 1NN, reaching accuracies of about 12 percentage points higher, which demonstrates that this type of base classifier is a better choice to be deployed once the system is stable enough. It is worth mentioning that these results consider the leave-one-out validation procedure, given the very low number of samples in some classes.",
"As we mentioned, the use of an 1NN classifier has allowed the developer of the system to easily add new intent classes and samples whenever they judged it necessary, so that the system could present new actions, or the understanding of the intents could be improved. As a consequence, the initial training set grew from 37 to 63 classes, and from 415 to 659 samples, with the number of samples per class varying from 2 to 63. For visualizing the impact on the accuracy of the system, in Table TABREF96 we present the accuracy of the same classifiers used in the previous evaluation, in the new set. In this case, we observe some drop in accuracy for 1NN, showing that this classifier suffers in dealing with scalability. On the other hand, SVM has shown to scale very well to more classes and samples, since its accuracy kept at a very similar level than that with the other set, with a difference of about only 1 percentage point."
],
[
"In this section, we describe the validation framework that we created for integration tests. For this, we developed it as a new component of SABIA's system architecture and it provides a high level language which is able to specify interaction scenarios that simulate users interacting with the deployed chatbots. The system testers provide a set of utterances and their corresponding expected responses, and the framework automatically simulates users interacting with the bots and collect metrics, such as time taken to answer an utterance and other resource consumption metrics (e.g., memory, CPU, network bandwidth). Our goal was to: (i) provide a tool for integration tests, (ii) to validate CognIA's implementation, and (iii) to support the system developers in understanding the behavior of the system and which aspects can be improved. Thus, whenever developers modify the system's source code, the modifications must first pass the automatic test before actual deployment.",
"The test framework works as follows. The system testers provide a set INLINEFORM0 of dialogues as input. Each dialogue INLINEFORM1 INLINEFORM2 INLINEFORM3 is an ordered set whose elements are represented by INLINEFORM4 , where INLINEFORM5 is the user utterance and INLINEFORM6 is an ordered set of pairs INLINEFORM7 that lists each response INLINEFORM8 each chatbot INLINEFORM9 should respond when the user says INLINEFORM10 . For instance, Table UID98 shows a typical dialogue ( INLINEFORM11 ) between a user and the CognIA system. Note that we are omitting part of the expected answer with \"...\" just to better visualize the content of the table.",
"|p3.6cmp0.4cmp4.5cmp3.2cm|Content of dialogue INLINEFORM0 (example of dialogue in CognIA)Content of dialogue INLINEFORM1 (example of dialogue in CognIA",
"User utterance INLINEFORM0 rId Expected response INLINEFORM1 Chatbot INLINEFORM2 ",
"gray!25 hello 1 Hello Mediator",
"white what is cdb? 2 @CDBExpert what is cdb? Mediator",
"white 3 CDB is a type of investment that... CDB Expert",
"gray!25 which is better: cdb or savings account? 4 I found a post in the social media for.... Mediator",
"white i would like to invest R$ 50 in six months 5 @SavingsAccountExpert and @CDBExpert, could you do a simulation... Mediator",
"white 6 If you invest in Savings Account, ... Savings Account Exp.",
"white 7 If you invest in CDB,... CDB Expert",
"white 8 Thanks Mediator",
"white 9 @User, there is no significant difference.. Mediator",
"gray!25 so i want to invest R$ 10000 in 2 years 10 @SavingsAccountExpert and @CDBExpert, could you do a simulation... Mediator",
"gray!25 11 If you invest in Savings Account,... Savings Account Exp.",
"gray!25 12 If you invest in CDB,... CDB Expert",
"gray!25 13 Thanks Mediator",
"gray!25 14 @User, in that case, it is better... Mediator",
"white what if i invest R$10,000 in 5 years? 15 @SavingsAccountExpert and @CDBExpert, could you do a simulation... Mediator",
"white 16 If you invest in Saving Account,... Savings Account Exp.",
"white 17 If you invest in CDB,... CDB Expert",
"white 18 Thanks Mediator",
"white 19 @User, in that case, it is better... Mediator",
"gray!25 how about 15 years? 20 @SavingsAccountExpert and @CDBExpert, could you do a simulation... Mediator",
"gray!25 21 If you invest in Savings Account,... Savings Account Exp",
"gray!25 22 If you invest in CDB,... CDB Expert",
"gray!25 23 Thanks Mediator",
"gray!25 24 @User, in that case, it is better... Mediator",
"white and 50,0000? 25 @SavingsAccountExpert and @CDBExpert, could you do a simulation... Mediator",
"white 26 If you invest in Savings Account,... Savings Account Exp.",
"white 27 If you invest in CDB,... CDB Expert",
"white 28 Thanks Mediator",
"white 29 @User, in that case, it is better.. Mediator",
"gray!25 I want to invest in 50,000 for 15 years in CDB 30 Sure, follow this link to your bank... Mediator",
"white thanks 31 You are welcome. Mediator",
"",
"The testers may also inform the number of simulated users that will concurrently use the platform. Then, for each simulated user, the test framework iterates over the dialogues in INLINEFORM0 and iterates over the elements in each dialogue to check if each utterance INLINEFORM1 was correctly responded with INLINEFORM2 by the chatbot INLINEFORM3 . There is a maximum time to wait. If a bot does not respond with the expected response in the maximum time (defined by the system developers), an error is raised and the test is stopped to inform the developers about the error. Otherwise, for each correct bot response, the test framework collects the time taken to respond that specific utterance by the bot for that specific user and continues for the next user utterance. Other consumption resource metrics (memory, CPU, network, disk). The framework is divided into two parts. One part is responsible to gather resource consumption metrics and it resides inside SABIA. The other part works as clients (users) interacting with the server. It collects information about time taken to answer utterances and checks if the utterances are answered correctly.",
"By doing this, we not only provide a sanity test for the domain application (CognIA) developed in SABIA framework, but also a performance analysis of the platform. That is, we can: validate if the bots are answering correctly given a pre-defined set of known dialogues, check if they are answering in a reasonable time, and verify the amount of computing resources that were consumed to answer a specific utterance. Given the complexity of CognIA, these tests enable debugging of specific features like: understanding the amount of network bandwidth to use external services, or analyzing CPU and memory consumption when responding a specific utterance. The later may happen when the system is performing more complex calculations to indicate the investment return, for instance.",
"CognIA was deployed on IBM Bluemix, a platform as a service, on a Liberty for Java Cloud Foundry app with 3 GB RAM memory and 1 GB disk. Each of the modules shown in Figure FIGREF74 are deployed on separate Bluemix servers. Node.JS and Socket.IO servers are both deployed as Node Cloud Foundry apps, with 256 MB RAM memory and 512 MB disk each. Search Finance on Social Media is on a Go build pack Cloud Foundry app with 128 MB RAM memory and 128 GB disk. For the framework part that simulates clients, we instantiated a virtual machine with 8 cores on IBM's SoftLayer that is able to communicate with Bluemix. Then, the system testers built two dialogues, i.e., INLINEFORM0 . The example shown in Table UID98 is the dialogue test INLINEFORM1 . For the dialogue INLINEFORM2 , although it also has 10 utterances, the testers varied some of them to check if other utterances in the finance domain (different from the ones in dialogue INLINEFORM3 ) are being responded as expected by the bots. Then, two tests are performed and the results are analyzed next. All tests were repeated until the standard deviation of the values was less than 1%. The results presented next are the average of these values within the 1% margin.",
"Test 1: The first test consists of running both dialogues INLINEFORM0 and INLINEFORM1 for only one user for sanity check. We set 30 seconds as the maximum time a simulated user should wait for a bot correct response before raising an error. The result is that all chatbots (Mediator, CDBExpert, and SavingsAccountExpert) responded all expected responses before the maximum time. Additionally, the framework collected how long each chatbot took to respond an expected answer.",
"In Figure FIGREF101 , we show the results for those time measurements for dialogue INLINEFORM0 , as for the dialogue INLINEFORM1 the results are approximately the same. The x-axis (Response Identifier) corresponds to the second column (Resp. Id) in Table UID98 . We can see, for example, that when the bot CDBExpert responds with the message 3 to the user utterance \"what is cdb?\", it is the only bot that takes time different than zero to answer, which is the expected behavior. We can also see that the Mediator bot is the one that takes the longest, as it is responsible to coordinate the other bots and the entire dialogue with the user. Moreover, when the expert bots (CDBExpert and SavingsAccountExpert) are called by the Mediator to respond to the simulation calculations (this happens in responses 6, 7, 11, 12, 16, 17, 21, 22, 26, 27), they take approximately the same to respond. Finally, we see that when the concluding responses to the simulation calculations are given by the Mediator (this happens in responses 9, 14, 19, 24, 29), the response times reaches the greatest values, being 20 seconds the greatest value in response 19. These results support the system developers to understand the behavior of the system when simulated users interact with it and then focus on specific messages that are taking longer.",
"Test 2: This test consists of running dialogue INLINEFORM0 , but now using eight concurrent simulated users. We set the maximum time to wait to 240 seconds, i.e., eight times the maximum set up for the single user in Test 1. The results are illustrated in Figure FIGREF102 , where we show the median time for the eight users. The maximum and minimum values are also presented with horizontal markers. Note that differently than what has been shown in Figure FIGREF101 , where each series represents one specific chatbot, in Figure FIGREF102 , the series represents the median response time for the responses in the order (x-axis) they are responded, regardless the chatbot.",
"Comparing the results in Figure FIGREF102 with the ones in Figure FIGREF101 , we can see that the bots take longer to respond when eight users are concurrently using the platform than when a single user uses it, as expected. For example, CDBExpert takes approximately 5 times longer to respond response 3 to eight users than to respond to one user. On average, the concluding responses to the simulation questions (i.e., responses 9, 14, 19, 24, 29) take approximately 7.3 times more to be responded with eight users than with one user, being the response 9 the one that presented greatest difference (11.4 times longer with eight users than with one). These results help the system developers to diagnose the scalability of the system architecture and to plan sizing and improvements."
],
[
"In this article, we explored the challenges of engineering MPCS and we have presented a hybrid conceptual architecture and its implementation with a finance advisory system.",
"We are currently evolving this architecture to be able to support decoupled interaction norms specification, and we are also developing a multi-party governance service that uses that specification to enforce exchange of compliant utterances.",
"In addition, we are exploring a micro-service implementation of SABIA in order to increase its scalability and performance, so thousands of members can join the system within thousands of conversations."
],
[
"The authors would like to thank Maximilien de Bayser, Ana Paula Appel, Flavio Figueiredo and Marisa Vasconcellos, who contributed with discussions during SABIA and CognIA's implementation."
]
],
"section_name": [
"Introduction",
"Challenges on Chattering",
"Conversational Systems",
"Types of Interactions",
"Types of Architectures",
"Types of Intentions",
"Types of Context Reasoning",
"Platforms",
"A Conceptual Architecture for Multiparty-Aware Chatbots",
"Workflow",
"Architecture Implementation and Evaluation",
"Speech-Act-based Intelligent Agents Framework",
"CognIA: A Cognitive Investment Advisor",
"Intention Classifier Accuracy",
"Testing SABIA",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"1bb2c0cc6929f5e1ae91b7759eaa78940f413efa",
"212472f3204a53ed289ac9353f8ad63da5c12e9b"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 15: Evaluation of different classifiers in the first version of the training set"
],
"extractive_spans": [],
"free_form_answer": "precision, recall, F1 and accuracy",
"highlighted_evidence": [
"FLOAT SELECTED: Table 15: Evaluation of different classifiers in the first version of the training set"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this section, we describe the validation framework that we created for integration tests. For this, we developed it as a new component of SABIA's system architecture and it provides a high level language which is able to specify interaction scenarios that simulate users interacting with the deployed chatbots. The system testers provide a set of utterances and their corresponding expected responses, and the framework automatically simulates users interacting with the bots and collect metrics, such as time taken to answer an utterance and other resource consumption metrics (e.g., memory, CPU, network bandwidth). Our goal was to: (i) provide a tool for integration tests, (ii) to validate CognIA's implementation, and (iii) to support the system developers in understanding the behavior of the system and which aspects can be improved. Thus, whenever developers modify the system's source code, the modifications must first pass the automatic test before actual deployment.",
"FLOAT SELECTED: Table 15: Evaluation of different classifiers in the first version of the training set"
],
"extractive_spans": [],
"free_form_answer": "Response time, resource consumption (memory, CPU, network bandwidth), precision, recall, F1, accuracy.",
"highlighted_evidence": [
"The system testers provide a set of utterances and their corresponding expected responses, and the framework automatically simulates users interacting with the bots and collect metrics, such as time taken to answer an utterance and other resource consumption metrics (e.g., memory, CPU, network bandwidth). ",
"FLOAT SELECTED: Table 15: Evaluation of different classifiers in the first version of the training set"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0ea991696fd52323e53aea094e27ad503c231993",
"5009619c8d8d818cd06ea769d20a766f3c693349"
],
"answer": [
{
"evidence": [
"Given that there is no public dataset available with financial intents in Portuguese, we have employed the incremental approach to create our own training set for the Intent Classifier. First, we applied the Wizard of Oz method and from this study, we have collected a set of 124 questions that the users asked. Next, after these questions have been manually classified into a set of intent classes, and used to train the first version of the system, this set has been increased both in terms of number of classes and samples per class, resulting in a training set with 37 classes of intents, and a total 415 samples, with samples per class ranging from 3 to 37.",
"We have created domain-specific word vectors by considering a set 246,945 documents, corresponding to of 184,001 Twitter posts and and 62,949 news articles, all related to finance ."
],
"extractive_spans": [],
"free_form_answer": "Custom dataset with user questions; set of documents, twitter posts and news articles, all related to finance.",
"highlighted_evidence": [
"Given that there is no public dataset available with financial intents in Portuguese, we have employed the incremental approach to create our own training set for the Intent Classifier.",
"We have created domain-specific word vectors by considering a set 246,945 documents, corresponding to of 184,001 Twitter posts and and 62,949 news articles, all related to finance ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Given that there is no public dataset available with financial intents in Portuguese, we have employed the incremental approach to create our own training set for the Intent Classifier. First, we applied the Wizard of Oz method and from this study, we have collected a set of 124 questions that the users asked. Next, after these questions have been manually classified into a set of intent classes, and used to train the first version of the system, this set has been increased both in terms of number of classes and samples per class, resulting in a training set with 37 classes of intents, and a total 415 samples, with samples per class ranging from 3 to 37."
],
"extractive_spans": [],
"free_form_answer": "a self-collected financial intents dataset in Portuguese",
"highlighted_evidence": [
"Given that there is no public dataset available with financial intents in Portuguese, we have employed the incremental approach to create our own training set for the Intent Classifier. First, we applied the Wizard of Oz method and from this study, we have collected a set of 124 questions that the users asked. Next, after these questions have been manually classified into a set of intent classes, and used to train the first version of the system, this set has been increased both in terms of number of classes and samples per class, resulting in a training set with 37 classes of intents, and a total 415 samples, with samples per class ranging from 3 to 37."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"f7ae5a683f6503fbae3f72af626be9d658d6df7d"
],
"answer": [
{
"evidence": [
"In this section we discuss the state of the art on conversational systems in three perspectives: types of interactions, types of architecture, and types of context reasoning. Then we present a table that consolidates and compares all of them.",
"ELIZA BIBREF11 was one of the first softwares created to understand natural language processing. Joseph Weizenbaum created it at the MIT in 1966 and it is well known for acting like a psychotherapist and it had only to reflect back onto patient's statements. ELIZA was created to tackle five \"fundamental technical problems\": the identification of critical words, the discovery of a minimal context, the choice of appropriate transformations, the generation of appropriate responses to the transformation or in the absence of critical words, and the provision of an ending capacity for ELIZA scripts.",
"Right after ELIZA came PARRY, developed by Kenneth Colby, who is psychiatrist at Stanford University in the early 1970s. The program was written using the MLISP language (meta-lisp) on the WAITS operating system running on a DEC PDP-10 and the code is non-portable. Parts of it were written in PDP-10 assembly code and others in MLISP. There may be other parts that require other language translators. PARRY was the first system to pass the Turing test - the psychiatrists were able to make the correct identification only 48 percent of the time, which is the same as a random guessing.",
"A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) BIBREF12 appeared in 1995 but current version utilizes AIML, an XML language designed for creating stimulus-response chat robots BIBREF13 . A.L.I.C.E. bot has, at present, more than 40,000 categories of knowledge, whereas the original ELIZA had only about 200. The program is unable to pass the Turing test, as even the casual user will often expose its mechanistic aspects in short conversations.",
"Cleverbot (1997-2014) is a chatbot developed by the British AI scientist Rollo Carpenter. It passed the 2011 Turing Test at the Technique Techno-Management Festival held by the Indian Institute of Technology Guwahati. Volunteers participate in four-minute typed conversations with either Cleverbot or humans, with Cleverbot voted 59.3 per cent human, while the humans themselves were rated just 63.3 per cent human BIBREF14 ."
],
"extractive_spans": [
"ELIZA",
" PARRY",
"A.L.I.C.E.",
"Cleverbot"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this section we discuss the state of the art on conversational systems in three perspectives: types of interactions, types of architecture, and types of context reasoning.",
"ELIZA BIBREF11 was one of the first softwares created to understand natural language processing. ",
"Right after ELIZA came PARRY, developed by Kenneth Colby, who is psychiatrist at Stanford University in the early 1970s.",
"A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) BIBREF12 appeared in 1995 but current version utilizes AIML, an XML language designed for creating stimulus-response chat robots BIBREF13 .",
"Cleverbot (1997-2014) is a chatbot developed by the British AI scientist Rollo Carpenter. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"What evaluation metrics did look at?",
"What datasets are used?",
"What is the state of the art described in the paper?"
],
"question_id": [
"cc608df2884e1e82679f663ed9d9d67a4b6c03f3",
"3e432d71512ffbd790a482c716e7079ee78ce732",
"dd76130ec5fac477123fe8880472d03fbafddef6"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 2: User initiative",
"Table 4: MultiParty Conversation with Mixed initiative",
"Table 5: Categories of Classical Chatbots per Interaction and Intentions",
"Table 6: Categories of Classical Chatbots per Architectures and Context Reasoning",
"Table 8: Platforms for Building Chatbots",
"Figure 1: Chatbot Knowledge’s Conceptual Model in a MPCS",
"Figure 2: Workflow",
"Table 9: Main Generic Speech Acts Classes",
"Figure 3: Action Execution",
"Table 10: MultiParty Conversation with Mixed initiative",
"Table 11: Cognia Interaction Norms",
"Figure 4: Cognia Deployment View",
"Table 12: CognIA Specific Speech Acts Classes",
"Table 13: Action Classes Mapping",
"Table 14: Action Execution",
"Table 15: Evaluation of different classifiers in the first version of the training set",
"Table 17: Content of dialogue d1 (example of dialogue in CognIA)",
"Figure 5: Single simulated user results for dialogue d1.",
"Figure 6: Eight concurrent simulated users."
],
"file": [
"4-Table2-1.png",
"5-Table4-1.png",
"6-Table5-1.png",
"6-Table6-1.png",
"15-Table8-1.png",
"16-Figure1-1.png",
"18-Figure2-1.png",
"21-Table9-1.png",
"22-Figure3-1.png",
"24-Table10-1.png",
"25-Table11-1.png",
"27-Figure4-1.png",
"29-Table12-1.png",
"31-Table13-1.png",
"32-Table14-1.png",
"32-Table15-1.png",
"33-Table17-1.png",
"36-Figure5-1.png",
"36-Figure6-1.png"
]
} | [
"What evaluation metrics did look at?",
"What datasets are used?"
] | [
[
"1705.01214-Testing SABIA-0",
"1705.01214-32-Table15-1.png"
],
[
"1705.01214-CognIA: A Cognitive Investment Advisor-62",
"1705.01214-CognIA: A Cognitive Investment Advisor-64"
]
] | [
"Response time, resource consumption (memory, CPU, network bandwidth), precision, recall, F1, accuracy.",
"a self-collected financial intents dataset in Portuguese"
] | 182 |
1908.07195 | ARAML: A Stable Adversarial Training Framework for Text Generation | Most of the existing generative adversarial networks (GAN) for text generation suffer from the instability of reinforcement learning training algorithms such as policy gradient, leading to unstable performance. To tackle this problem, we propose a novel framework called Adversarial Reward Augmented Maximum Likelihood (ARAML). During adversarial training, the discriminator assigns rewards to samples which are acquired from a stationary distribution near the data rather than the generator's distribution. The generator is optimized with maximum likelihood estimation augmented by the discriminator's rewards instead of policy gradient. Experiments show that our model can outperform state-of-the-art text GANs with a more stable training process. | {
"paragraphs": [
[
"Natural text generation, as a key task in NLP, has been advanced substantially thanks to the flourish of neural models BIBREF0 , BIBREF1 . Typical frameworks such as sequence-to-sequence (seq2seq) have been applied to various generation tasks, including machine translation BIBREF2 and dialogue generation BIBREF3 . The standard paradigm to train such neural models is maximum likelihood estimation (MLE), which maximizes the log-likelihood of observing each word in the text given the ground-truth proceeding context BIBREF4 .",
"Although widely used, MLE suffers from the exposure bias problem BIBREF5 , BIBREF6 : during test, the model sequentially predicts the next word conditioned on its previous generated words while during training conditioned on ground-truth words. To tackle this problem, generative adversarial networks (GAN) with reinforcement learning (RL) training approaches have been introduced to text generation tasks BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , where the discriminator is trained to distinguish real and generated text samples to provide reward signals for the generator, and the generator is optimized via policy gradient BIBREF7 .",
"However, recent studies have shown that potential issues of training GANs on discrete data are more severe than exposure bias BIBREF14 , BIBREF15 . One of the fundamental issues when generating discrete text samples with GANs is training instability. Updating the generator with policy gradient always leads to an unstable training process because it's difficult for the generator to derive positive and stable reward signals from the discriminator even with careful pre-training BIBREF8 . As a result, the generator gets lost due to the high variance of reward signals and the training process may finally collapse BIBREF16 .",
"In this paper, we propose a novel adversarial training framework called Adversarial Reward Augmented Maximum Likelihood (ARAML) to deal with the instability issue of training GANs for text generation. At each iteration of adversarial training, we first train the discriminator to assign higher rewards to real data than to generated samples. Then, inspired by reward augmented maximum likelihood (RAML) BIBREF17 , the generator is updated on the samples acquired from a stationary distribution with maximum likelihood estimation (MLE), weighted by the discriminator's rewards. This stationary distribution is designed to guarantee that training samples are surrounding the real data, thus the exploration space of our generator is indeed restricted by the MLE training objective, resulting in more stable training. Compared to other text GANs with RL training techniques, our framework acquires samples from the stationary distribution rather than the generator's distribution, and uses RAML training paradigm to optimize the generator instead of policy gradient. Our contributions are mainly as follows:"
],
[
"Recently, text generation has been widely studied with neural models trained with maximum likelihood estimation BIBREF4 . However, MLE tends to generate universal text BIBREF18 . Various methods have been proposed to enhance the generation quality by refining the objective function BIBREF18 , BIBREF19 or modifying the generation distribution with external information like topic BIBREF20 , sentence type BIBREF21 , emotion BIBREF22 and knowledge BIBREF23 .",
"As mentioned above, MLE suffers from the exposure bias problem BIBREF5 , BIBREF6 . Thus, reinforcement learning has been introduced to text generation tasks such as policy gradient BIBREF6 and actor-critic BIBREF24 . BIBREF17 proposed an efficient and stable approach called Reward Augmented Maximum Likelihood (RAML), which connects the log-likelihood and expected rewards to incorporate MLE training objective into RL framework.",
"Since some text generation tasks have no explicit metrics to be directly optimized, adversarial training has been applied to generating discrete text samples with a discriminator to learn a proper reward. For instance, SeqGAN BIBREF7 devised a discriminator to distinguish the real data and generated samples, and a generator to maximize the reward from the discriminator via policy gradient. Other variants of GANs have been proposed to improve the generator or the discriminator. To improve the generator, MaliGAN BIBREF8 developed a normalized maximum likelihood optimization target for the generator to stably model the discrete sequences. LeakGAN BIBREF11 guided the generator with reward signals leaked from the discriminator at all generation steps to deal with long text generation task. MaskGAN BIBREF10 employed an actor-critic architecture to make the generator fill in missing text conditioned on the surrounding context, which is expected to mitigate the problem of mode collapse. As for the discriminator, RankGAN BIBREF9 replaced traditional discriminator with a ranker to learn the relative ranking information between the real texts and generated ones. Inverse reinforcement learning BIBREF12 used a trainable reward approximator as the discriminator to provide dense reward signals at each generation step. DPGAN BIBREF13 introduced a language model based discriminator and regarded cross-entropy as rewards to promote the diversity of generation results.",
"The most similar works to our model are RAML BIBREF17 and MaliGAN BIBREF8 : 1) Compared with RAML, our model adds a discriminator to learn the reward signals instead of choosing existing metrics as rewards. We believe that our model can adapt to various text generation tasks, particularly those without explicit evaluation metrics. 2) Unlike MaliGAN, we acquire samples from a fixed distribution near the real data rather than the generator's distribution, which is expected to make the training process more stable."
],
[
"Text generation can be formulated as follows: given the real data distribution INLINEFORM0 , the task is to train a generative model INLINEFORM1 where INLINEFORM2 can fit INLINEFORM3 well. In this formulation, INLINEFORM4 and INLINEFORM5 denotes a word in the vocabulary INLINEFORM6 .",
"Figure FIGREF3 shows the overview of our model ARAML. This adversarial training framework consists of two phases: 1) The discriminator is trained to assign higher rewards to real data than to generated data. 2) The generator is trained on the samples acquired from a stationary distribution with reward augmented MLE training objective. This training paradigm of the generator indeed constrains the search space with the MLE training objective, which alleviates the issue of unstable training."
],
[
"The discriminator INLINEFORM0 aims to distinguish real data and generated data like other GANs. Inspired by Least-Square GAN BIBREF25 , we devise the loss function as follows: DISPLAYFORM0 ",
"This loss function forces the discriminator to assign higher rewards to real data than to generated data, so the discriminator can learn to provide more proper rewards as the training proceeds."
],
[
"The training objective of our generator INLINEFORM0 is derived from the objective of other discrete GANs with RL training method: DISPLAYFORM0 ",
" where INLINEFORM0 denotes the rewards from the discriminator INLINEFORM1 and the entropy regularized term INLINEFORM2 encourages INLINEFORM3 to generate diverse text samples. INLINEFORM4 is a temperature hyper-parameter to balance these two terms.",
"As mentioned above, discrete GANs suffer from the instability issue due to policy gradient, thus they are consequently difficult to train. Inspired by RAML BIBREF17 , we introduce an exponential payoff distribution INLINEFORM0 to connect RL loss with RAML loss: DISPLAYFORM0 ",
" where INLINEFORM0 . Thus, we can rewrite INLINEFORM1 with INLINEFORM2 and INLINEFORM3 as follows: DISPLAYFORM0 ",
" Following RAML, we remove the constant term and optimize the KL divergence in the opposite direction: DISPLAYFORM0 ",
" where INLINEFORM0 is a constant in the training phase of the generator. It has been proved that INLINEFORM1 and INLINEFORM2 are equivalent up to their first order Taylor approximations, and they have the same global optimum BIBREF17 . INLINEFORM3 can be trained in a MLE-like fashion but sampling from the distribution INLINEFORM4 is intractable in the adversarial setting, because INLINEFORM5 varies with the discriminator INLINEFORM6 . Thus, we introduce importance sampling to separate sampling process from INLINEFORM7 and obtain the final loss function: DISPLAYFORM0 ",
" where INLINEFORM0 denotes a stationary distribution and INLINEFORM1 . To optimize this loss function, we first construct the fixed distribution INLINEFORM2 to get samples, and devise the proper reward function INLINEFORM3 to train the generator in a stable and effective way.",
"We construct the distribution INLINEFORM0 based on INLINEFORM1 : DISPLAYFORM0 ",
" In this way, INLINEFORM0 can be designed to guarantee that INLINEFORM1 is near INLINEFORM2 , leading to a more stable training process. To obtain a new sample INLINEFORM3 from a real data sample INLINEFORM4 , we can design three steps which contain sampling an edit distance INLINEFORM5 , the positions INLINEFORM6 for substitution and the new words INLINEFORM7 filled into the corresponding positions. Thus, INLINEFORM8 can be decomposed into three terms: DISPLAYFORM0 ",
"The first step is to sample an edit distance based on a real data sample INLINEFORM0 , where INLINEFORM1 is a sequence of length INLINEFORM2 . The number of sentences which have the edit distance INLINEFORM3 to some input sentence can be computed approximately as below: DISPLAYFORM0 ",
"where INLINEFORM0 denotes the number of sentences which have an edit distance INLINEFORM1 to a sentence of length INLINEFORM2 , and INLINEFORM3 indicates the size of vocabulary. We then follow BIBREF17 to re-scale the counts by INLINEFORM4 and do normalization, so that we can sample an edit distance INLINEFORM5 from: DISPLAYFORM0 ",
"where INLINEFORM0 , as a temperature hyper-parameter, restricts the search space surrounding the original sentence. Larger INLINEFORM1 brings more samples with long edit distances.",
"The next step is to select positions for substitution based on the sampled edit distance INLINEFORM0 . Intuitively, we can randomly choose INLINEFORM1 distinct positions in INLINEFORM2 to be replaced by new words. The probability of choosing the position INLINEFORM3 is calculated as follows: DISPLAYFORM0 ",
"Following this sampling strategy, we can obtain the position set INLINEFORM0 . This strategy approximately guarantees that the edit distance between a new sentence and the original sentence is INLINEFORM1 .",
"At the final step, our model determines new words for substitution at each sampled position INLINEFORM0 . We can formulate this sampling process from the original sequence INLINEFORM1 to a new sample INLINEFORM2 as a sequential transition INLINEFORM3 . At each step from INLINEFORM4 to INLINEFORM5 INLINEFORM6 , we first sample a new word INLINEFORM7 from the distribution INLINEFORM8 , then replace the old word at position INLINEFORM9 of INLINEFORM10 to obtain INLINEFORM11 . The whole sampling process can be decomposed as follows: DISPLAYFORM0 ",
" There are two common sampling strategies to model INLINEFORM0 , i.e. random sampling and constrained sampling. Random sampling strategy samples a new word INLINEFORM1 according to the uniform distribution over the vocabulary INLINEFORM2 BIBREF17 , while constrained sampling strategy samples INLINEFORM3 to maximize the language model score of the target sentence INLINEFORM4 BIBREF26 , BIBREF27 . Here, we adopt constrained sampling in our model and compare the performances of two strategies in the experiment.",
"We devise the reward function INLINEFORM0 according to the discriminator's output INLINEFORM1 and the stationary distribution INLINEFORM2 : DISPLAYFORM0 ",
" Intuitively, this reward function encourages the generator to generate sentences with large sampling probability and high rewards from the discriminator. Thus, the weight of samples INLINEFORM0 can be calculated as follows: DISPLAYFORM0 ",
" So far, we can successfully optimize the generator's loss INLINEFORM0 via Equation EQREF12 . This training paradigm makes our generator avoid possible variances caused by policy gradient and get more stable reward signals from the discriminator, because our generator is restricted to explore the training samples near the real data.",
"[htb] Adversarial Reward Augmented Maximum Likelihood [1] ",
"Total adversarial training iterations: INLINEFORM0 ",
"Steps of training generator: INLINEFORM0 ",
"Steps of training discriminator: INLINEFORM0 ",
"Pre-train the generator INLINEFORM0 with MLE loss Generate samples from INLINEFORM1 Pre-train the discriminator INLINEFORM2 via Eq.( EQREF6 ) Construct INLINEFORM3 via Eq.( EQREF14 ) - Eq.( EQREF19 ) each INLINEFORM4 each INLINEFORM5 Update INLINEFORM6 via Eq.( EQREF12 ) each INLINEFORM7 Update INLINEFORM8 via Eq.( EQREF6 )"
],
[
"We have shown our adversarial training framework for text generation tasks without an input. Actually, it can also be extended to conditional text generation tasks like dialogue generation. Given the data distribution INLINEFORM0 where INLINEFORM1 denote contexts and responses respectively, the objective function of ARAML's generator can be modified as below: DISPLAYFORM0 ",
" where INLINEFORM0 and INLINEFORM1 is trained to distinguish whether INLINEFORM2 is the true response to INLINEFORM3 ."
],
[
"The most similar works to our framework are RAML BIBREF17 and MaliGAN BIBREF8 . The main difference among them is the training objective of their generators. We have shown different objective functions in Table TABREF26 . For comparison, we use the form with no input for all the three models.",
"Our model is greatly inspired by RAML, which gets samples from a non-parametric distribution INLINEFORM0 constructed based on a specific reward. Compared to RAML, our reward comes from a learnable discriminator which varies as the adversarial training proceeds rather than a specific reward function. This difference equips our framework with the ability to adapt to the text generation tasks with no explicit evaluation metrics as rewards.",
"Our model is also similar to MaliGAN, which gets samples from the generator's distribution. In MaliGAN's training objective, INLINEFORM0 also indicates the generator's distribution but it's used in the sampling phase and fixed at each optimization step. The weight of samples INLINEFORM1 . Different from our model, MaliGAN acquires samples from the generator's distribution INLINEFORM2 , which usually brings samples with low rewards even with careful pre-training for the generator, leading to training instability. Instead, our framework gets samples from a stationary distribution INLINEFORM3 around real data, thus our training process is more stable."
],
[
"We evaluated ARAML on three datasets: COCO image caption dataset BIBREF28 , EMNLP2017 WMT dataset and WeiboDial single-turn dialogue dataset BIBREF29 . COCO and EMNLP2017 WMT are the common benchmarks with no input to evaluate the performance of discrete GANs, and we followed the existing works to preprocess these datasets BIBREF12 , BIBREF11 . WeiboDial, as a dialogue dataset, was applied to test the performance of our model with input trigger. We simply removed post-response pairs containing low-frequency words and randomly selected a subset for our training/test set. The statistics of three datasets are presented in Table TABREF28 ."
],
[
"We compared our model with MLE, RL and GAN baselines. Since COCO and EMNLP2017 WMT don't have input while WeiboDial regards posts as input, we chose the following baselines respectively:",
"MLE: a RNN model trained with MLE objective BIBREF4 . Its extension, Seq2Seq, can work on the dialogue dataset BIBREF2 .",
"SeqGAN: The first text GAN model that updates the generator with policy gradient based on the rewards from the discriminator BIBREF7 .",
"LeakGAN: A variant of SeqGAN that provides rewards based on the leaked information of the discriminator for the generator BIBREF11 .",
"MaliGAN: A variant of SeqGAN that optimizes the generator with a normalized maximum likelihood objective BIBREF8 .",
"IRL: This inverse reinforcement learning method replaces the discriminator with a reward approximator to provide dense rewards BIBREF12 .",
"RAML: A RL approach to incorporate MLE objective into RL training framework, which regards BLEU as rewards BIBREF17 .",
"DialogGAN: An extension of SeqGAN tuned to dialogue generation task with MLE objective added to the adversarial objective BIBREF16 .",
"DPGAN: A variant of DialogGAN which uses a language model based discriminator and regards cross-entropy as rewards BIBREF13 .",
"Note that MLE, SeqGAN, LeakGAN, MaliGAN and IRL are the baselines on COCO and EMNLP2017 WMT, while MLE, RAML, DialogGAN, and DPGAN on WeiboDial. The original codes are used to test the baselines."
],
[
"The implementation details of our model are shown in Table TABREF31 . For COCO / EMNLP2017, the generator is a LSTM unit BIBREF30 with 128 cells, and the discriminator is implemented based on BIBREF7 . For WeiboDial, the generator is an encoder-decoder structure with attention mechanism, where both the encoder and the decoder consist of a two-layer GRU BIBREF31 with 128 cells. The discriminator is implemented based on BIBREF32 . The language model used in the constrained sampling of ARAML is implemented in the same setting as the generators, and is pre-trained on the training set of each dataset. The codes and the datasets are available at https://github.com/kepei1106/ARAML.",
"As for the details of the baselines, the generators of all the baselines except LeakGAN are the same as ours. Note that the generator of LeakGAN consists of a hierarchical LSTM unit, thus we followed the implementation in the original paper. In terms of the differences, the discriminators of GAN baselines are implemented based on the original papers. Other hyper-parameters of baselines including batch size, learning rate, and pre-training epochs, were set based on the original codes, because the convergence of baselines is sensitive to these hyper-parameters."
],
[
"We adopted forward/reverse perplexity BIBREF33 and Self-BLEU BIBREF34 to evaluate the quality of generated texts. Forward perplexity (PPL-F) indicates the perplexity on the generated data provided by a language model trained on real data to measure the fluency of generated samples. Reverse perplexity (PPL-R) switches the roles of generated data and real data to reflect the discrepancy between the generated distribution and the data distribution. Self-BLEU (S-BLEU) regards each sentence in the generated collection as hypothesis and the others as reference to obtain BLEU scores, which evaluates the diversity of generated results.",
"Results are shown in Table TABREF33 . LeakGAN performs best on forward perplexity because it can generate more fluent samples. As for reverse perplexity, our model ARAML beats other baselines, showing that our model can fit the data distribution better. Other GANs, particularly LeakGAN, obtain high reverse perplexity due to mode collapse BIBREF12 , thus they only capture limited fluent expressions, resulting in large discrepancy between the generated distribution and data distribution. ARAML also outperforms the baselines in terms of Self-BLEU, indicating that our model doesn't fall into mode collapse with the help of the MLE training objective and has the ability to generate more diverse sentences.",
"We also provide standard deviation of each metric in Table TABREF33 , reflecting the stability of each model's performance. Our model ARAML nearly achieves the smallest standard deviation in all the metrics, indicating that our framework outperforms policy gradient in the stability of adversarial training."
],
[
"Dialogue evaluation is an open problem and existing works have found that automatic metrics have low correlation to human evaluation BIBREF35 , BIBREF36 , BIBREF37 . Thus, we resorted to manual evaluation to assess the generation quality on WeiboDial. We randomly sampled 200 posts from the test set and collected the generated results from all the models. For each pair of responses (one from ARAML and the other from a baseline, given the same input post), five annotators were hired to label which response is better (i.e. win, lose or tie) in terms of grammaticality (whether a response itself is grammatical and logical) and relevance (whether a response is appropriate and relevant to the post). The two metrics were evaluated independently.",
"The evaluation results are shown in Table TABREF35 . To measure the inter-annotator agreement, we calculated Fleiss' kappa BIBREF38 for each pair-wise comparison where results show moderate agreement ( INLINEFORM0 ). We also conducted sign test to check the significance of the differences.",
"As shown in Table TABREF35 , ARAML performs significantly better than other baselines in all the cases. This result indicates that the samples surrounding true responses provide stable rewards for the generator, and stable RAML training paradigm significantly enhances the performance in both metrics."
],
[
"To verify the training stability, we conducted experiments on COCO many times and chose the best 5 trials for SeqGAN, LeakGAN, IRL, MaliGAN and ARAML, respectively. Then, we presented the forward/reverse perplexity in the training process in Figure FIGREF38 . We can see that our model with smaller standard deviation is more stable than other GAN baselines in both metrics. Although LeakGAN reaches the best forward perplexity, its standard deviation is extremely large and it performs badly in reverse perplexity, indicating that it generates limited expressions that are grammatical yet divergent from the data distribution."
],
[
"The temperature INLINEFORM0 controls the search space surrounding the real data as we analyze in Section UID13 . To investigate its impact on the performance of our model, we fixed all the other hyper-parameters and test ARAML with different temperatures on COCO.",
"The experimental results are shown in Figure FIGREF41 . We can see that as the temperature becomes larger, forward perplexity increases gradually while Self-BLEU decreases. As mentioned in Section UID13 , large temperatures encourage our generator to explore the samples that are distant from real data distribution, thus the diversity of generated results will be improved. However, these samples distant from the data distribution are more likely to be poor in fluency, leading to worse forward perplexity. Reverse perplexity is influenced by both generation quality and diversity, so the correlation between temperature and reverse perplexity is not intuitive. We can observe that the model with INLINEFORM0 reaches the best reverse perplexity.",
"We have mentioned two common sampling strategies in Section UID13 , i.e. random sampling and constrained sampling. To analyze their impact, we keep all the model structures and hyper-parameters fixed and test ARAML with these two strategies on COCO.",
"Table TABREF45 shows the results. It's obvious that random sampling hurts the model performance except Self-BLEU-1, because it indeed allows low-quality samples available to the generator. Exploring these samples degrades the quality and diversity of generated results. Despite the worse performance on automatic metrics, random sampling doesn't affect the training stability of our framework. The standard deviation of ARAML-R is still smaller than other GAN baselines."
],
[
"Table TABREF47 presents the examples generated by the models on COCO. We can find that other baselines suffer from grammatical errors (e.g. “in front of flying her kite\" from MLE), repetitive expressions (e.g. “A group of people\" from IRL) and incoherent statements (e.g. “A group of people sitting on a cell phone” from IRL). By contrast, our model performs well in these sentences and has the ability to generate grammatical and coherent results.",
"Table TABREF48 shows the generated examples on WeiboDial. It's obvious that other baselines don't capture the topic word “late\" in the post, thus generate irrelevant responses. ARAML can provide a response that is grammatical and closely relevant to the post."
],
[
"We propose a novel adversarial training framework to deal with the instability problem of current GANs for text generation. To address the instability issue caused by policy gradient, we incorporate RAML into the advesarial training paradigm to make our generator acquire stable rewards. Experiments show that our model performs better than several state-of-the-art GAN baselines with lower training variance, yet producing better performance on three text generation tasks."
],
[
"This work was supported by the National Science Foundation of China (Grant No. 61936010/61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank THUNUS NExT Joint-Lab for the support."
]
],
"section_name": [
"Introduction",
"Related Work",
"Task Definition and Model Overview",
"Discriminator",
"Generator",
"Extension to Conditional Text Generation",
"Comparison with RAML and MaliGAN",
"Datasets",
"Baselines",
"Implementation Details",
"Language Generation on COCO and EMNLP2017 WMT",
"Dialogue Generation on WeiboDial",
"Further Analysis on Stability",
"Ablation Study",
"Case Study",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"9121b87725a3203a426ec3615c826080c38ea46c",
"4302032b9501efddf6fcfeb44bf9b3a703260b4a"
],
"answer": [
{
"evidence": [
"We compared our model with MLE, RL and GAN baselines. Since COCO and EMNLP2017 WMT don't have input while WeiboDial regards posts as input, we chose the following baselines respectively:",
"MLE: a RNN model trained with MLE objective BIBREF4 . Its extension, Seq2Seq, can work on the dialogue dataset BIBREF2 .",
"SeqGAN: The first text GAN model that updates the generator with policy gradient based on the rewards from the discriminator BIBREF7 .",
"LeakGAN: A variant of SeqGAN that provides rewards based on the leaked information of the discriminator for the generator BIBREF11 .",
"MaliGAN: A variant of SeqGAN that optimizes the generator with a normalized maximum likelihood objective BIBREF8 .",
"IRL: This inverse reinforcement learning method replaces the discriminator with a reward approximator to provide dense rewards BIBREF12 .",
"RAML: A RL approach to incorporate MLE objective into RL training framework, which regards BLEU as rewards BIBREF17 .",
"DialogGAN: An extension of SeqGAN tuned to dialogue generation task with MLE objective added to the adversarial objective BIBREF16 .",
"DPGAN: A variant of DialogGAN which uses a language model based discriminator and regards cross-entropy as rewards BIBREF13 ."
],
"extractive_spans": [
"MLE",
"SeqGAN",
"LeakGAN",
"MaliGAN",
"IRL",
"RAML",
"DialogGAN",
"DPGAN"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compared our model with MLE, RL and GAN baselines. Since COCO and EMNLP2017 WMT don't have input while WeiboDial regards posts as input, we chose the following baselines respectively:\n\nMLE: a RNN model trained with MLE objective BIBREF4 . Its extension, Seq2Seq, can work on the dialogue dataset BIBREF2 .\n\nSeqGAN: The first text GAN model that updates the generator with policy gradient based on the rewards from the discriminator BIBREF7 .\n\nLeakGAN: A variant of SeqGAN that provides rewards based on the leaked information of the discriminator for the generator BIBREF11 .\n\nMaliGAN: A variant of SeqGAN that optimizes the generator with a normalized maximum likelihood objective BIBREF8 .\n\nIRL: This inverse reinforcement learning method replaces the discriminator with a reward approximator to provide dense rewards BIBREF12 .\n\nRAML: A RL approach to incorporate MLE objective into RL training framework, which regards BLEU as rewards BIBREF17 .\n\nDialogGAN: An extension of SeqGAN tuned to dialogue generation task with MLE objective added to the adversarial objective BIBREF16 .\n\nDPGAN: A variant of DialogGAN which uses a language model based discriminator and regards cross-entropy as rewards BIBREF13 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compared our model with MLE, RL and GAN baselines. Since COCO and EMNLP2017 WMT don't have input while WeiboDial regards posts as input, we chose the following baselines respectively:",
"SeqGAN: The first text GAN model that updates the generator with policy gradient based on the rewards from the discriminator BIBREF7 .",
"LeakGAN: A variant of SeqGAN that provides rewards based on the leaked information of the discriminator for the generator BIBREF11 .",
"MaliGAN: A variant of SeqGAN that optimizes the generator with a normalized maximum likelihood objective BIBREF8 .",
"DialogGAN: An extension of SeqGAN tuned to dialogue generation task with MLE objective added to the adversarial objective BIBREF16 .",
"DPGAN: A variant of DialogGAN which uses a language model based discriminator and regards cross-entropy as rewards BIBREF13 ."
],
"extractive_spans": [
"SeqGAN",
"LeakGAN",
"MaliGAN",
"DialogGAN",
"DPGAN"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since COCO and EMNLP2017 WMT don't have input while WeiboDial regards posts as input, we chose the following baselines respectively:",
"SeqGAN: The first text GAN model that updates the generator with policy gradient based on the rewards from the discriminator BIBREF7 .",
"LeakGAN: A variant of SeqGAN that provides rewards based on the leaked information of the discriminator for the generator BIBREF11 .",
"MaliGAN: A variant of SeqGAN that optimizes the generator with a normalized maximum likelihood objective BIBREF8 .",
"DialogGAN: An extension of SeqGAN tuned to dialogue generation task with MLE objective added to the adversarial objective BIBREF16 .",
"DPGAN: A variant of DialogGAN which uses a language model based discriminator and regards cross-entropy as rewards BIBREF13 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"4ba0238968874db3a91b2b3be869fd785520e341",
"605ef5be4a83a894ca7ea46d7ebce30351cbca71"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation."
],
"extractive_spans": [],
"free_form_answer": "ARAM has achieved improvement over all baseline methods using reverese perplexity and slef-BLEU metric. The maximum reverse perplexity improvement 936,16 is gained for EMNLP2017 WMT dataset and 48,44 for COCO dataset.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation.",
"FLOAT SELECTED: Table 5: Human evaluation on WeiboDial. The scores represent the percentages of Win, Lose or Tie when our model is compared with a baseline. κ denotes Fleiss’ kappa (all are moderate agreement). The scores marked with * mean p-value< 0.05 and ** indicates p-value< 0.01 in sign test."
],
"extractive_spans": [],
"free_form_answer": "Compared to the baselines, ARAML does not do better in terms of perplexity on COCO and EMNLP 2017 WMT datasets, but it does by up to 0.27 Self-BLEU points on COCO and 0.35 Self-BLEU on EMNLP 2017 WMT. In terms of Grammaticality and Relevance, it scores better than the baselines on up to 75.5% and 73% of the cases respectively.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation.",
"FLOAT SELECTED: Table 5: Human evaluation on WeiboDial. The scores represent the percentages of Win, Lose or Tie when our model is compared with a baseline. κ denotes Fleiss’ kappa (all are moderate agreement). The scores marked with * mean p-value< 0.05 and ** indicates p-value< 0.01 in sign test."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"3c0d42931aaae53acefbee56b67ca230244422b4"
]
},
{
"annotation_id": [
"0ebc6ca5161ad73f8982fc32879aae2a03b759b3"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What GAN models were used as baselines to compare against?",
"How much improvement is gained from Adversarial Reward Augmented Maximum Likelihood (ARAML)?",
"Is the discriminator's reward made available at each step to the generator?"
],
"question_id": [
"43eecc576348411b0634611c81589f618cd4fddf",
"79f9468e011670993fd162543d1a4b3dd811ac5d",
"c262d3d1c5a8b6fef6b594d5eee86bc2b09e3baf"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Overview of ARAML. The training samples are acquired from a stationary distribution Ps based on the real data. The generator is then trained on the samples augmented by the discriminator’s rewards. The discriminator is trained to distinguish real data and generated data.",
"Table 2: Statistics of COCO, EMNLP2017 WMT and WeiboDial. The average lengths 7.3/10.8 of WeiboDial indicate the lengths of posts and responses, respectively.",
"Table 1: Training objectives of generators for RAML, MaliGAN and ARAML.",
"Table 3: Implementation details of ARAML. G/D/LM indicates the generator / discriminator / language model used in constrained sampling, respectively.",
"Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation.",
"Figure 2: PPL-F/PPL-R curves of ARAML, SeqGAN, LeakGAN, MaliGAN and IRL in the training process. The shade area indicates the standard deviation at each data point. The dotted vertical lines separate pre-training and adversarial training phases (50 for ARAML, IRL and MaliGAN, 80 for SeqGAN and LeakGAN).",
"Table 5: Human evaluation on WeiboDial. The scores represent the percentages of Win, Lose or Tie when our model is compared with a baseline. κ denotes Fleiss’ kappa (all are moderate agreement). The scores marked with * mean p-value< 0.05 and ** indicates p-value< 0.01 in sign test.",
"Table 6: PPL-F, PPL-R and S-BLEU of ARAML with random sampling (ARAML-R) and constrained sampling (ARAML-C) on COCO.",
"Table 7: Examples of generated sentences on COCO. Grammatical errors are in red, while blue text represents repetitive expressions and green part indicates incoherent statements.",
"Table 8: Examples of generated responses on WeiboDial."
],
"file": [
"3-Figure1-1.png",
"5-Table2-1.png",
"5-Table1-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Figure2-1.png",
"8-Table5-1.png",
"8-Table6-1.png",
"9-Table7-1.png",
"9-Table8-1.png"
]
} | [
"How much improvement is gained from Adversarial Reward Augmented Maximum Likelihood (ARAML)?"
] | [
[
"1908.07195-7-Table4-1.png",
"1908.07195-8-Table5-1.png"
]
] | [
"Compared to the baselines, ARAML does not do better in terms of perplexity on COCO and EMNLP 2017 WMT datasets, but it does by up to 0.27 Self-BLEU points on COCO and 0.35 Self-BLEU on EMNLP 2017 WMT. In terms of Grammaticality and Relevance, it scores better than the baselines on up to 75.5% and 73% of the cases respectively."
] | 183 |
1903.11437 | Using Monolingual Data in Neural Machine Translation: a Systematic Study | Neural Machine Translation (MT) has radically changed the way systems are developed. A major difference with the previous generation (Phrase-Based MT) is the way monolingual target data, which often abounds, is used in these two paradigms. While Phrase-Based MT can seamlessly integrate very large language models trained on billions of sentences, the best option for Neural MT developers seems to be the generation of artificial parallel data through \textsl{back-translation} - a technique that fails to fully take advantage of existing datasets. In this paper, we conduct a systematic study of back-translation, comparing alternative uses of monolingual data, as well as multiple data generation procedures. Our findings confirm that back-translation is very effective and give new explanations as to why this is the case. We also introduce new data simulation techniques that are almost as effective, yet much cheaper to implement. | {
"paragraphs": [
[
"The new generation of Neural Machine Translation (NMT) systems is known to be extremely data hungry BIBREF0 . Yet, most existing NMT training pipelines fail to fully take advantage of the very large volume of monolingual source and/or parallel data that is often available. Making a better use of data is particularly critical in domain adaptation scenarios, where parallel adaptation data is usually assumed to be small in comparison to out-of-domain parallel data, or to in-domain monolingual texts. This situation sharply contrasts with the previous generation of statistical MT engines BIBREF1 , which could seamlessly integrate very large amounts of non-parallel documents, usually with a large positive effect on translation quality.",
"Such observations have been made repeatedly and have led to many innovative techniques to integrate monolingual data in NMT, that we review shortly. The most successful approach to date is the proposal of BIBREF2 , who use monolingual target texts to generate artificial parallel data via backward translation (BT). This technique has since proven effective in many subsequent studies. It is however very computationally costly, typically requiring to translate large sets of data. Determining the “right” amount (and quality) of BT data is another open issue, but we observe that experiments reported in the literature only use a subset of the available monolingual resources. This suggests that standard recipes for BT might be sub-optimal.",
"This paper aims to better understand the strengths and weaknesses of BT and to design more principled techniques to improve its effects. More specifically, we seek to answer the following questions: since there are many ways to generate pseudo parallel corpora, how important is the quality of this data for MT performance? Which properties of back-translated sentences actually matter for MT quality? Does BT act as some kind of regularizer BIBREF3 ? Can BT be efficiently simulated? Does BT data play the same role as a target-side language modeling, or are they complementary? BT is often used for domain adaptation: can the effect of having more in-domain data be sorted out from the mere increase of training material BIBREF2 ? For studies related to the impact of varying the size of BT data, we refer the readers to the recent work of BIBREF4 .",
"To answer these questions, we have reimplemented several strategies to use monolingual data in NMT and have run experiments on two language pairs in a very controlled setting (see § SECREF2 ). Our main results (see § SECREF4 and § SECREF5 ) suggest promising directions for efficient domain adaptation with cheaper techniques than conventional BT."
],
[
"We are mostly interested with the following training scenario: a large out-of-domain parallel corpus, and limited monolingual in-domain data. We focus here on the Europarl domain, for which we have ample data in several languages, and use as in-domain training data the Europarl corpus BIBREF5 for two translation directions: English INLINEFORM0 German and English INLINEFORM1 French. As we study the benefits of monolingual data, most of our experiments only use the target side of this corpus. The rationale for choosing this domain is to (i) to perform large scale comparisons of synthetic and natural parallel corpora; (ii) to study the effect of BT in a well-defined domain-adaptation scenario. For both language pairs, we use the Europarl tests from 2007 and 2008 for evaluation purposes, keeping test 2006 for development. When measuring out-of-domain performance, we will use the WMT newstest 2014."
],
[
"Our baseline NMT system implements the attentional encoder-decoder approach BIBREF6 , BIBREF7 as implemented in Nematus BIBREF8 on 4 million out-of-domain parallel sentences. For French we use samples from News-Commentary-11 and Wikipedia from WMT 2014 shared translation task, as well as the Multi-UN BIBREF9 and EU-Bookshop BIBREF10 corpora. For German, we use samples from News-Commentary-11, Rapid, Common-Crawl (WMT 2017) and Multi-UN (see table TABREF5 ). Bilingual BPE units BIBREF11 are learned with 50k merge operations, yielding vocabularies of about respectively 32k and 36k for English INLINEFORM0 French and 32k and 44k for English INLINEFORM1 German.",
"Both systems use 512-dimensional word embeddings and a single hidden layer with 1024 cells. They are optimized using Adam BIBREF12 and early stopped according to the validation performance. Training lasted for about three weeks on an Nvidia K80 GPU card.",
"Systems generating back-translated data are trained using the same out-of-domain corpus, where we simply exchange the source and target sides. They are further documented in § SECREF8 . For the sake of comparison, we also train a system that has access to a large batch of in-domain parallel data following the strategy often referred to as “fine-tuning”: upon convergence of the baseline model, we resume training with a 2M sentence in-domain corpus mixed with an equal amount of randomly selected out-of-domain natural sentences, with the same architecture and training parameters, running validation every 2000 updates with a patience of 10. Since BPE units are selected based only on the out-of-domain statistics, fine-tuning is performed on sentences that are slightly longer (ie. they contain more units) than for the initial training. This system defines an upper-bound of the translation performance and is denoted below as natural.",
"Our baseline and topline results are in Table TABREF6 , where we measure translation performance using BLEU BIBREF13 , BEER BIBREF14 (higher is better) and characTER BIBREF15 (smaller is better). As they are trained from much smaller amounts of data than current systems, these baselines are not quite competitive to today's best system, but still represent serious baselines for these datasets. Given our setups, fine-tuning with in-domain natural data improves BLEU by almost 4 points for both translation directions on in-domain tests; it also improves, albeit by a smaller margin, the BLEU score of the out-of-domain tests."
],
[
"A simple way to use monolingual data in MT is to turn it into synthetic parallel data and let the training procedure run as usual BIBREF16 . In this section, we explore various ways to implement this strategy. We first reproduce results of BIBREF2 with BT of various qualities, that we then analyze thoroughly."
],
[
"BT requires the availability of an MT system in the reverse translation direction. We consider here three MT systems of increasing quality:",
"backtrans-bad: this is a very poor SMT system trained using only 50k parallel sentences from the out-of-domain data, and no additional monolingual data. For this system as for the next one, we use Moses BIBREF17 out-of-the-box, computing alignments with Fastalign BIBREF18 , with a minimal pre-processing (basic tokenization). This setting provides us with a pessimistic estimate of what we could get in low-resource conditions.",
"backtrans-good: these are much larger SMT systems, which use the same parallel data as the baseline NMTs (see § SECREF4 ) and all the English monolingual data available for the WMT 2017 shared tasks, totalling approximately 174M sentences. These systems are strong, yet relatively cheap to build.",
"backtrans-nmt: these are the best NMT systems we could train, using settings that replicate the forward translation NMTs.",
"Note that we do not use any in-domain (Europarl) data to train these systems. Their performance is reported in Table TABREF7 , where we observe a 12 BLEU points gap between the worst and best systems (for both languages).",
"As noted eg. in BIBREF19 , BIBREF20 , artificial parallel data obtained through forward-translation (FT) can also prove advantageous and we also consider a FT system (fwdtrans-nmt): in this case the target side of the corpus is artificial and is generated using the baseline NMT applied to a natural source.",
"Our results (see Table TABREF6 ) replicate the findings of BIBREF2 : large gains can be obtained from BT (nearly INLINEFORM0 BLEU in French and German); better artificial data yields better translation systems. Interestingly, our best Moses system is almost as good as the NMT and an order of magnitude faster to train. Improvements obtained with the bad system are much smaller; contrary to the better MTs, this system is even detrimental for the out-of-domain test.",
"Gains with forward translation are significant, as in BIBREF21 , albeit about half as good as with BT, and result in small improvements for the in-domain and for the out-of-domain tests. Experiments combining forward and backward translation (backfwdtrans-nmt), each using a half of the available artificial data, do not outperform the best BT results.",
"We finally note the large remaining difference between BT data and natural data, even though they only differ in their source side. This shows that at least in our domain-adaptation settings, BT does not really act as a regularizer, contrarily to the findings of BIBREF4 , BIBREF11 . Figure FIGREF13 displays the learning curves of these two systems. We observe that backtrans-nmt improves quickly in the earliest updates and then stays horizontal, whereas natural continues improving, even after 400k updates. Therefore BT does not help to avoid overfitting, it actually encourages it, which may be due “easier” training examples (cf. § SECREF15 )."
],
[
"Comparing the natural and artificial sources of our parallel data wrt. several linguistic and distributional properties, we observe that (see Fig. FIGREF21 - FIGREF22 ):",
"artificial sources are on average shorter than natural ones: when using BT, cases where the source is shorter than the target are rarer; cases when they have the same length are more frequent.",
"automatic word alignments between artificial sources tend to be more monotonic than when using natural sources, as measured by the average Kendall INLINEFORM0 of source-target alignments BIBREF22 : for French-English the respective numbers are 0.048 (natural) and 0.018 (artificial); for German-English 0.068 and 0.053. Using more monotonic sentence pairs turns out to be a facilitating factor for NMT, as also noted by BIBREF20 .",
"syntactically, artificial sources are simpler than real data; We observe significant differences in the distributions of tree depths.",
"distributionally, plain word occurrences in artificial sources are more concentrated; this also translates into both a slower increase of the number of types wrt. the number of sentences and a smaller number of rare events.",
"The intuition is that properties (i) and (ii) should help translation as compared to natural source, while property (iv) should be detrimental. We checked (ii) by building systems with only 10M words from the natural parallel data selecting these data either randomly or based on the regularity of their word alignments. Results in Table TABREF23 show that the latter is much preferable for the overall performance. This might explain that the mostly monotonic BT from Moses are almost as good as the fluid BT from NMT and that both boost the baseline."
],
[
"We now analyze the effect of using much simpler data generation schemes, which do not require the availability of a backward translation engine."
],
[
"We use the following cheap ways to generate pseudo-source texts:",
"copy: in this setting, the source side is a mere copy of the target-side data. Since the source vocabulary of the NMT is fixed, copying the target sentences can cause the occurrence of OOVs. To avoid this situation, BIBREF24 decompose the target words into source-side units to make the copy look like source sentences. Each OOV found in the copy is split into smaller units until all the resulting chunks are in the source vocabulary.",
"copy-marked: another way to integrate copies without having to deal with OOVs is to augment the source vocabulary with a copy of the target vocabulary. In this setup, BIBREF25 ensure that both vocabularies never overlap by marking the target word copies with a special language identifier. Therefore the English word resume cannot be confused with the homographic French word, which is marked @fr@resume.",
"copy-dummies: instead of using actual copies, we replace each word with “dummy” tokens. We use this unrealistic setup to observe the training over noisy and hardly informative source sentences.",
"We then use the procedures described in § SECREF4 , except that the pseudo-source embeddings in the copy-marked setup are pretrained for three epochs on the in-domain data, while all remaining parameters are frozen. This prevents random parameters from hurting the already trained model."
],
[
"We observe that the copy setup has only a small impact on the English-French system, for which the baseline is already strong. This is less true for English-German where simple copies yield a significant improvement. Performance drops for both language pairs in the copy-dummies setup.",
"We achieve our best gains with the copy-marked setup, which is the best way to use a copy of the target (although the performance on the out-of-domain tests is at most the same as the baseline). Such gains may look surprising, since the NMT model does not need to learn to translate but only to copy the source. This is indeed what happens: to confirm this, we built a fake test set having identical source and target side (in French). The average cross-entropy for this test set is 0.33, very close to 0, to be compared with an average cost of 58.52 when we process an actual source (in English). This means that the model has learned to copy words from source to target with no difficulty, even for sentences not seen in training. A follow-up question is whether training a copying task instead of a translation task limits the improvement: would the NMT learn better if the task was harder? To measure this, we introduce noise in the target sentences copied onto the source, following the procedure of BIBREF26 : it deletes random words and performs a small random permutation of the remaining words. Results (+ Source noise) show no difference for the French in-domain test sets, but bring the out-of-domain score to the level of the baseline. Finally, we observe a significant improvement on German in-domain test sets, compared to the baseline (about +1.5 BLEU). This last setup is even almost as good as the backtrans-nmt condition (see § SECREF8 ) for German. This shows that learning to reorder and predict missing words can more effectively serve our purposes than simply learning to copy."
],
[
"Integrating monolingual data into NMT can be as easy as copying the target into the source, which already gives some improvement; adding noise makes things even better. We now study ways to make pseudo-sources look more like natural data, using the framework of Generative Adversarial Networks (GANs) BIBREF27 , an idea borrowed from BIBREF26 ."
],
[
"In our setups, we use a marked target copy, viewed as a fake source, which a generator encodes so as to fool a discriminator trained to distinguish a fake from a natural source. Our architecture contains two distinct encoders, one for the natural source and one for the pseudo-source. The latter acts as the generator ( INLINEFORM0 ) in the GAN framework, computing a representation of the pseudo-source that is then input to a discriminator ( INLINEFORM1 ), which has to sort natural from artificial encodings. INLINEFORM2 assigns a probability of a sentence being natural.",
"During training, the cost of the discriminator is computed over two batches, one with natural (out-of-domain) sentences INLINEFORM0 and one with (in-domain) pseudo-sentences INLINEFORM1 . The discriminator is a bidirectional-Recurrent Neural Network (RNN) of dimension 1024. Averaged states are passed to a single feed-forward layer, to which a sigmoid is applied. It inputs encodings of natural ( INLINEFORM2 ) and pseudo-sentences ( INLINEFORM3 ) and is trained to optimize:",
" INLINEFORM0 ",
"",
" INLINEFORM0 's parameters are updated to maximally fool INLINEFORM1 , thus the loss INLINEFORM2 : INLINEFORM3 ",
"Finally, we keep the usual MT objective. ( INLINEFORM0 is a real or pseudo-sentence): INLINEFORM1 ",
"We thus need to train three sets of parameters: INLINEFORM0 and INLINEFORM1 (MT parameters), with INLINEFORM2 . The pseudo-source encoder and embeddings are updated wrt. both INLINEFORM3 and INLINEFORM4 . Following BIBREF28 , INLINEFORM5 is updated only when INLINEFORM6 's accuracy exceeds INLINEFORM7 . On the other hand, INLINEFORM8 is not updated when its accuracy exceeds INLINEFORM9 . At each update, two batches are generated for each type of data, which are encoded with the real or pseudo-encoder. The encoder outputs serve to compute INLINEFORM10 and INLINEFORM11 . Finally, the pseudo-source is encoded again (once INLINEFORM12 is updated), both encoders are plugged into the translation model and the MT cost is back-propagated down to the real and pseudo-word embeddings. Pseudo-encoder and discriminator parameters are pre-trained for 10k updates. At test time, the pseudo-encoder is ignored and inference is run as usual."
],
[
"Results are in Table TABREF32 , assuming the same fine-tuning procedure as above. On top of the copy-marked setup, our GANs do not provide any improvement in both language pairs, with the exception of a small improvement for English-French on the out-of-domain test, which we understand as a sign that the model is more robust to domain variations, just like when adding pseudo-source noise. When combined with noise, the French model yields the best performance we could obtain with stupid BT on the in-domain tests, at least in terms of BLEU and BEER. On the News domain, we remain close to the baseline level, with slight improvements in German.",
"A first observation is that this method brings stupid BT models closer to conventional BT, at a greatly reduced computational cost. While French still remains 0.4 to 1.0 BLEU below very good backtranslation, both approaches are in the same ballpark for German - may be because BTs are better for the former system than for the latter.",
"Finally note that the GAN architecture has two differences with basic copy-marked: (a) a distinct encoder for real and pseudo-sentence; (b) a different training regime for these encoders. To sort out the effects of (a) and (b), we reproduce the GAN setup with BT sentences, instead of copies. Using a separate encoder for the pseudo-source in the backtrans-nmt setup can be detrimental to performance (see Table TABREF32 ): translation degrades in French for all metrics. Adding GANs on top of the pseudo-encoder was not able to make up for the degradation observed in French, but allowed the German system to slightly outperform backtrans-nmt. Even though this setup is unrealistic and overly costly, it shows that GANs are actually helping even good systems."
],
[
"In this section, we compare the previous methods with the use of a target side Language Model (LM). Several proposals exist in the literature to integrate LMs in NMT: for instance, BIBREF3 strengthen the decoder by integrating an extra, source independent, RNN layer in a conventional NMT architecture. Training is performed either with parallel, or monolingual data. In the latter case, word prediction only relies on the source independent part of the network."
],
[
"We have followed BIBREF29 and reimplemented their deep-fusion technique. It requires to first independently learn a RNN-LM on the in-domain target data with a cross-entropy objective; then to train the optimal combination of the translation and the language models by adding the hidden state of the RNN-LM as an additional input to the softmax layer of the decoder. Our RNN-LMs are trained using dl4mt with the target side of the parallel data and the Europarl corpus (about 6M sentences for both French and German), using a one-layer GRU with the same dimension as the MT decoder (1024)."
],
[
"Results are in Table TABREF33 . They show that deep-fusion hardly improves the Europarl results, while we obtain about +0.6 BLEU over the baseline on newstest-2014 for both languages. deep-fusion differs from stupid BT in that the model is not directly optimized on the in-domain data, but uses the LM trained on Europarl to maximize the likelihood of the out-of-domain training data. Therefore, no specific improvement is to be expected in terms of domain adaptation, and the performance increases in the more general domain. Combining deep-fusion and copy-marked + noise + GANs brings slight improvements on the German in-domain test sets, and performance out of the domain remains near the baseline level."
],
[
"As a follow up of previous discussions, we analyze the effect of BT on the internals of the network. Arguably, using a copy of the target sentence instead of a natural source should not be helpful for the encoder, but is it also the case with a strong BT? What are the effects on the attention model?"
],
[
"To investigate these questions, we run the same fine-tuning using the copy-marked, backtrans-nmt and backtrans-nmt setups. Note that except for the last one, all training scenarios have access to same target training data. We intend to see whether the overall performance of the NMT system degrades when we selectively freeze certain sets of parameters, meaning that they are not updated during fine-tuning."
],
[
"BLEU scores are in Table TABREF39 . The backtrans-nmt setup is hardly impacted by selective updates: updating the only decoder leads to a degradation of at most 0.2 BLEU. For copy-marked, we were not able to freeze the source embeddings, since these are initialized when fine-tuning begins and therefore need to be trained. We observe that freezing the encoder and/or the attention parameters has no impact on the English-German system, whereas it slightly degrades the English-French one. This suggests that using artificial sources, even of the poorest quality, has a positive impact on all the components of the network, which makes another big difference with the LM integration scenario.",
"The largest degradation is for natural, where the model is prevented from learning from informative source sentences, which leads to a decrease of 0.4 to over 1.0 BLEU. We assume from these experiments that BT impacts most of all the decoder, and learning to encode a pseudo-source, be it a copy or an actual back-translation, only marginally helps to significantly improve the quality. Finally, in the fwdtrans-nmt setup, freezing the decoder does not seem to harm learning with a natural source."
],
[
"The literature devoted to the use of monolingual data is large, and quickly expanding. We already alluded to several possible ways to use such data: using back- or forward-translation or using a target language model. The former approach is mostly documented in BIBREF2 , and recently analyzed in BIBREF19 , which focus on fully artificial settings as well as pivot-based artificial data; and BIBREF4 , which studies the effects of increasing the size of BT data. The studies of BIBREF20 , BIBREF19 also consider forward translation and BIBREF21 expand these results to domain adaptation scenarios. Our results are complementary to these earlier studies.",
"As shown above, many alternatives to BT exist. The most obvious is to use target LMs BIBREF3 , BIBREF29 , as we have also done here; but attempts to improve the encoder using multi-task learning also exist BIBREF30 .",
"This investigation is also related to recent attempts to consider supplementary data with a valid target side, such as multi-lingual NMT BIBREF31 , where source texts in several languages are fed in the same encoder-decoder architecture, with partial sharing of the layers. This is another realistic scenario where additional resources can be used to selectively improve parts of the model.",
"Round trip training is another important source of inspiration, as it can be viewed as a way to use BT to perform semi-unsupervised BIBREF32 or unsupervised BIBREF33 training of NMT. The most convincing attempt to date along these lines has been proposed by BIBREF26 , who propose to use GANs to mitigate the difference between artificial and natural data."
],
[
"In this paper, we have analyzed various ways to integrate monolingual data in an NMT framework, focusing on their impact on quality and domain adaptation. While confirming the effectiveness of BT, our study also proposed significantly cheaper ways to improve the baseline performance, using a slightly modified copy of the target, instead of its full BT. When no high quality BT is available, using GANs to make the pseudo-source sentences closer to natural source sentences is an efficient solution for domain adaptation.",
"To recap our answers to our initial questions: the quality of BT actually matters for NMT (cf. § SECREF8 ) and it seems that, even though artificial source are lexically less diverse and syntactically complex than real sentence, their monotonicity is a facilitating factor. We have studied cheaper alternatives and found out that copies of the target, if properly noised (§ SECREF4 ), and even better, if used with GANs, could be almost as good as low quality BTs (§ SECREF5 ): BT is only worth its cost when good BT can be generated. Finally, BT seems preferable to integrating external LM - at least in our data condition (§ SECREF6 ). Further experiments with larger LMs are needed to confirm this observation, and also to evaluate the complementarity of both strategies. More work is needed to better understand the impact of BT on subparts of the network (§ SECREF7 ).",
"In future work, we plan to investigate other cheap ways to generate artificial data. The experimental setup we proposed may also benefit from a refining of the data selection strategies to focus on the most useful monolingual sentences."
]
],
"section_name": [
"Introduction ",
"In-domain and out-of-domain data ",
"NMT setups and performance ",
"Using artificial parallel data in NMT ",
"The quality of Back-Translation ",
"Properties of back-translated data ",
"Stupid Back-Translation ",
"Setups",
"Copy+marking+noise is not so stupid",
"Towards more natural pseudo-sources ",
"GAN setups",
"GANs can help",
"Using Target Language Models ",
"LM Setup",
"LM Results",
"Re-analyzing the effects of BT ",
"Parameter freezing protocol",
"Results",
"Related work",
"Conclusion "
]
} | {
"answers": [
{
"annotation_id": [
"659d29bf46e636815ab82af8773e573308a4832a",
"febb9a96151935a0b9e344260fc38bd1a75adead"
],
"answer": [
{
"evidence": [
"In this paper, we have analyzed various ways to integrate monolingual data in an NMT framework, focusing on their impact on quality and domain adaptation. While confirming the effectiveness of BT, our study also proposed significantly cheaper ways to improve the baseline performance, using a slightly modified copy of the target, instead of its full BT. When no high quality BT is available, using GANs to make the pseudo-source sentences closer to natural source sentences is an efficient solution for domain adaptation."
],
"extractive_spans": [],
"free_form_answer": "They use a slightly modified copy of the target to create the pseudo-text instead of full BT to make their technique cheaper",
"highlighted_evidence": [
"While confirming the effectiveness of BT, our study also proposed significantly cheaper ways to improve the baseline performance, using a slightly modified copy of the target, instead of its full BT. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We now analyze the effect of using much simpler data generation schemes, which do not require the availability of a backward translation engine."
],
"extractive_spans": [],
"free_form_answer": "They do not require the availability of a backward translation engine.",
"highlighted_evidence": [
"We now analyze the effect of using much simpler data generation schemes, which do not require the availability of a backward translation engine."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"6a2d9af23bad8e1bbd347521b9ec5d2f4199ec6f",
"bff350b52e142d8c9cd06fef983678a8ff5a98b5"
],
"answer": [
{
"evidence": [
"We use the following cheap ways to generate pseudo-source texts:",
"copy: in this setting, the source side is a mere copy of the target-side data. Since the source vocabulary of the NMT is fixed, copying the target sentences can cause the occurrence of OOVs. To avoid this situation, BIBREF24 decompose the target words into source-side units to make the copy look like source sentences. Each OOV found in the copy is split into smaller units until all the resulting chunks are in the source vocabulary.",
"copy-marked: another way to integrate copies without having to deal with OOVs is to augment the source vocabulary with a copy of the target vocabulary. In this setup, BIBREF25 ensure that both vocabularies never overlap by marking the target word copies with a special language identifier. Therefore the English word resume cannot be confused with the homographic French word, which is marked @fr@resume.",
"copy-dummies: instead of using actual copies, we replace each word with “dummy” tokens. We use this unrealistic setup to observe the training over noisy and hardly informative source sentences."
],
"extractive_spans": [
"copy",
"copy-marked",
"copy-dummies"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the following cheap ways to generate pseudo-source texts:\n\ncopy: in this setting, the source side is a mere copy of the target-side data.",
"copy-marked: another way to integrate copies without having to deal with OOVs is to augment the source vocabulary with a copy of the target vocabulary. ",
"copy-dummies: instead of using actual copies, we replace each word with “dummy” tokens. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the following cheap ways to generate pseudo-source texts:",
"copy: in this setting, the source side is a mere copy of the target-side data. Since the source vocabulary of the NMT is fixed, copying the target sentences can cause the occurrence of OOVs. To avoid this situation, BIBREF24 decompose the target words into source-side units to make the copy look like source sentences. Each OOV found in the copy is split into smaller units until all the resulting chunks are in the source vocabulary.",
"copy-marked: another way to integrate copies without having to deal with OOVs is to augment the source vocabulary with a copy of the target vocabulary. In this setup, BIBREF25 ensure that both vocabularies never overlap by marking the target word copies with a special language identifier. Therefore the English word resume cannot be confused with the homographic French word, which is marked @fr@resume.",
"copy-dummies: instead of using actual copies, we replace each word with “dummy” tokens. We use this unrealistic setup to observe the training over noisy and hardly informative source sentences."
],
"extractive_spans": [
"copy",
"copy-marked",
"copy-dummies"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the following cheap ways to generate pseudo-source texts:",
"copy: in this setting, the source side is a mere copy of the target-side data. ",
"copy-marked: another way to integrate copies without having to deal with OOVs is to augment the source vocabulary with a copy of the target vocabulary. ",
"copy-dummies: instead of using actual copies, we replace each word with “dummy” tokens. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"a45d430cb52f239d3abc2258abde78f6597c94cf"
],
"answer": [
{
"evidence": [
"Comparing the natural and artificial sources of our parallel data wrt. several linguistic and distributional properties, we observe that (see Fig. FIGREF21 - FIGREF22 ):",
"artificial sources are on average shorter than natural ones: when using BT, cases where the source is shorter than the target are rarer; cases when they have the same length are more frequent.",
"automatic word alignments between artificial sources tend to be more monotonic than when using natural sources, as measured by the average Kendall INLINEFORM0 of source-target alignments BIBREF22 : for French-English the respective numbers are 0.048 (natural) and 0.018 (artificial); for German-English 0.068 and 0.053. Using more monotonic sentence pairs turns out to be a facilitating factor for NMT, as also noted by BIBREF20 .",
"The intuition is that properties (i) and (ii) should help translation as compared to natural source, while property (iv) should be detrimental. We checked (ii) by building systems with only 10M words from the natural parallel data selecting these data either randomly or based on the regularity of their word alignments. Results in Table TABREF23 show that the latter is much preferable for the overall performance. This might explain that the mostly monotonic BT from Moses are almost as good as the fluid BT from NMT and that both boost the baseline."
],
"extractive_spans": [
"when using BT, cases where the source is shorter than the target are rarer; cases when they have the same length are more frequent",
"automatic word alignments between artificial sources tend to be more monotonic than when using natural sources"
],
"free_form_answer": "",
"highlighted_evidence": [
"Comparing the natural and artificial sources of our parallel data wrt. several linguistic and distributional properties, we observe that (see Fig. FIGREF21 - FIGREF22 ):\n\nartificial sources are on average shorter than natural ones: when using BT, cases where the source is shorter than the target are rarer; cases when they have the same length are more frequent.\n\nautomatic word alignments between artificial sources tend to be more monotonic than when using natural sources, as measured by the average Kendall INLINEFORM0 of source-target alignments BIBREF22 : for French-English the respective numbers are 0.048 (natural) and 0.018 (artificial); for German-English 0.068 and 0.053. ",
"The intuition is that properties (i) and (ii) should help translation as compared to natural source, while property (iv) should be detrimental."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"7cf71ab57c85410b1ff4f24aee0c4748153e7715",
"95911d3a38686b20a1c98ad74431d1ee31cfc399"
],
"answer": [
{
"evidence": [
"We are mostly interested with the following training scenario: a large out-of-domain parallel corpus, and limited monolingual in-domain data. We focus here on the Europarl domain, for which we have ample data in several languages, and use as in-domain training data the Europarl corpus BIBREF5 for two translation directions: English INLINEFORM0 German and English INLINEFORM1 French. As we study the benefits of monolingual data, most of our experiments only use the target side of this corpus. The rationale for choosing this domain is to (i) to perform large scale comparisons of synthetic and natural parallel corpora; (ii) to study the effect of BT in a well-defined domain-adaptation scenario. For both language pairs, we use the Europarl tests from 2007 and 2008 for evaluation purposes, keeping test 2006 for development. When measuring out-of-domain performance, we will use the WMT newstest 2014.",
"Our baseline NMT system implements the attentional encoder-decoder approach BIBREF6 , BIBREF7 as implemented in Nematus BIBREF8 on 4 million out-of-domain parallel sentences. For French we use samples from News-Commentary-11 and Wikipedia from WMT 2014 shared translation task, as well as the Multi-UN BIBREF9 and EU-Bookshop BIBREF10 corpora. For German, we use samples from News-Commentary-11, Rapid, Common-Crawl (WMT 2017) and Multi-UN (see table TABREF5 ). Bilingual BPE units BIBREF11 are learned with 50k merge operations, yielding vocabularies of about respectively 32k and 36k for English INLINEFORM0 French and 32k and 44k for English INLINEFORM1 German."
],
"extractive_spans": [
"Europarl corpus ",
"WMT newstest 2014",
"News-Commentary-11",
"Wikipedia from WMT 2014",
"Multi-UN",
"EU-Bookshop",
"Rapid",
"Common-Crawl (WMT 2017)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We focus here on the Europarl domain, for which we have ample data in several languages, and use as in-domain training data the Europarl corpus BIBREF5 for two translation directions: English INLINEFORM0 German and English INLINEFORM1 French.",
"When measuring out-of-domain performance, we will use the WMT newstest 2014.",
"or French we use samples from News-Commentary-11 and Wikipedia from WMT 2014 shared translation task, as well as the Multi-UN BIBREF9 and EU-Bookshop BIBREF10 corpora. For German, we use samples from News-Commentary-11, Rapid, Common-Crawl (WMT 2017) and Multi-UN (see table TABREF5 ). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We are mostly interested with the following training scenario: a large out-of-domain parallel corpus, and limited monolingual in-domain data. We focus here on the Europarl domain, for which we have ample data in several languages, and use as in-domain training data the Europarl corpus BIBREF5 for two translation directions: English INLINEFORM0 German and English INLINEFORM1 French. As we study the benefits of monolingual data, most of our experiments only use the target side of this corpus. The rationale for choosing this domain is to (i) to perform large scale comparisons of synthetic and natural parallel corpora; (ii) to study the effect of BT in a well-defined domain-adaptation scenario. For both language pairs, we use the Europarl tests from 2007 and 2008 for evaluation purposes, keeping test 2006 for development. When measuring out-of-domain performance, we will use the WMT newstest 2014."
],
"extractive_spans": [],
"free_form_answer": "Europarl tests from 2006, 2007, 2008; WMT newstest 2014.",
"highlighted_evidence": [
"For both language pairs, we use the Europarl tests from 2007 and 2008 for evaluation purposes, keeping test 2006 for development. When measuring out-of-domain performance, we will use the WMT newstest 2014."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"11cc6febc54a56f040128308573899fbf0786a9d",
"a1f0b005fc0ced0358ecec94e1f6485df1356526"
],
"answer": [
{
"evidence": [
"We are mostly interested with the following training scenario: a large out-of-domain parallel corpus, and limited monolingual in-domain data. We focus here on the Europarl domain, for which we have ample data in several languages, and use as in-domain training data the Europarl corpus BIBREF5 for two translation directions: English INLINEFORM0 German and English INLINEFORM1 French. As we study the benefits of monolingual data, most of our experiments only use the target side of this corpus. The rationale for choosing this domain is to (i) to perform large scale comparisons of synthetic and natural parallel corpora; (ii) to study the effect of BT in a well-defined domain-adaptation scenario. For both language pairs, we use the Europarl tests from 2007 and 2008 for evaluation purposes, keeping test 2006 for development. When measuring out-of-domain performance, we will use the WMT newstest 2014."
],
"extractive_spans": [],
"free_form_answer": "English-German, English-French.",
"highlighted_evidence": [
"We focus here on the Europarl domain, for which we have ample data in several languages, and use as in-domain training data the Europarl corpus BIBREF5 for two translation directions: English INLINEFORM0 German and English INLINEFORM1 French."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We are mostly interested with the following training scenario: a large out-of-domain parallel corpus, and limited monolingual in-domain data. We focus here on the Europarl domain, for which we have ample data in several languages, and use as in-domain training data the Europarl corpus BIBREF5 for two translation directions: English INLINEFORM0 German and English INLINEFORM1 French. As we study the benefits of monolingual data, most of our experiments only use the target side of this corpus. The rationale for choosing this domain is to (i) to perform large scale comparisons of synthetic and natural parallel corpora; (ii) to study the effect of BT in a well-defined domain-adaptation scenario. For both language pairs, we use the Europarl tests from 2007 and 2008 for evaluation purposes, keeping test 2006 for development. When measuring out-of-domain performance, we will use the WMT newstest 2014."
],
"extractive_spans": [],
"free_form_answer": "English-German, English-French",
"highlighted_evidence": [
"We focus here on the Europarl domain, for which we have ample data in several languages, and use as in-domain training data the Europarl corpus BIBREF5 for two translation directions: English INLINEFORM0 German and English INLINEFORM1 French. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"9e04866dc9ac32d3161efa7a9639e8db77dda87c"
],
"answer": [
{
"evidence": [
"We are mostly interested with the following training scenario: a large out-of-domain parallel corpus, and limited monolingual in-domain data. We focus here on the Europarl domain, for which we have ample data in several languages, and use as in-domain training data the Europarl corpus BIBREF5 for two translation directions: English INLINEFORM0 German and English INLINEFORM1 French. As we study the benefits of monolingual data, most of our experiments only use the target side of this corpus. The rationale for choosing this domain is to (i) to perform large scale comparisons of synthetic and natural parallel corpora; (ii) to study the effect of BT in a well-defined domain-adaptation scenario. For both language pairs, we use the Europarl tests from 2007 and 2008 for evaluation purposes, keeping test 2006 for development. When measuring out-of-domain performance, we will use the WMT newstest 2014."
],
"extractive_spans": [
"English ",
"German",
"French"
],
"free_form_answer": "",
"highlighted_evidence": [
"We focus here on the Europarl domain, for which we have ample data in several languages, and use as in-domain training data the Europarl corpus BIBREF5 for two translation directions: English INLINEFORM0 German and English INLINEFORM1 French. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"why are their techniques cheaper to implement?",
"what data simulation techniques were introduced?",
"what is their explanation for the effectiveness of back-translation?",
"what dataset is used?",
"what language pairs are explored?",
"what language is the data in?"
],
"question_id": [
"87bb3105e03ed6ac5abfde0a7ca9b8de8985663c",
"d9980676a83295dda37c20cfd5d58e574d0a4859",
"9225b651e0fed28d4b6261a9f6b443b52597e401",
"565189b672efee01d22f4fc6b73cd5287b2ee72c",
"b6f7fadaa1bb828530c2d6780289f12740229d84",
"7b9ca0e67e394f1674f0bcf1c53dfc2d474f8613"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Size of parallel corpora",
"Table 2: Performance wrt. different BT qualities",
"Table 3: BLEU scores for (backward) translation into English",
"Figure 1: Learning curves from backtrans-nmt and natural. Artificial parallel data is more prone to overfitting than natural data.",
"Figure 2: Properties of pseudo-English data obtained with backtrans-nmt from French. The synthetic source contains shorter sentences (a) and slightly simpler syntax (b). The vocabulary growth wrt. an increasing number of observed sentences (c) and the token-type correlation (d) suggest that the natural source is lexically richer.",
"Table 4: Selection strategies for BT data (English-French)",
"Figure 3: Properties of pseudo-English data obtained with backtrans-nmt (back-translated from German). Tendencies similar to English-French can be observed and difference in syntax complexity is even more visible.",
"Table 5: Performance wrt. various stupid BTs",
"Table 6: Performance wrt. different GAN setups",
"Table 7: Deep-fusion model",
"Table 8: BLEU scores with selective parameter freezing"
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Figure1-1.png",
"5-Figure2-1.png",
"5-Table4-1.png",
"6-Figure3-1.png",
"7-Table5-1.png",
"8-Table6-1.png",
"8-Table7-1.png",
"10-Table8-1.png"
]
} | [
"why are their techniques cheaper to implement?",
"what dataset is used?",
"what language pairs are explored?"
] | [
[
"1903.11437-Conclusion -0",
"1903.11437-Stupid Back-Translation -0"
],
[
"1903.11437-NMT setups and performance -0",
"1903.11437-In-domain and out-of-domain data -0"
],
[
"1903.11437-In-domain and out-of-domain data -0"
]
] | [
"They do not require the availability of a backward translation engine.",
"Europarl tests from 2006, 2007, 2008; WMT newstest 2014.",
"English-German, English-French"
] | 186 |
1909.10012 | Is change the only constant? Profile change perspective on #LokSabhaElections2019 | Users on Twitter are identified with the help of their profile attributes that consists of username, display name, profile image, to name a few. The profile attributes that users adopt can reflect their interests, belief, or thematic inclinations. Literature has proposed the implications and significance of profile attribute change for a random population of users. However, the use of profile attribute for endorsements and to start a movement have been under-explored. In this work, we consider #LokSabhaElections2019 as a movement and perform a large-scale study of the profile of users who actively made changes to profile attributes centered around #LokSabhaElections2019. We collect the profile metadata for 49.4M users for a period of 2 months from April 5, 2019 to June 5, 2019 amid #LokSabhaElections2019. We investigate how the profile changes vary for the influential leaders and their followers over the social movement. We further differentiate the organic and inorganic ways to show the political inclination from the prism of profile changes. We report how the addition of election campaign related keywords lead to spread of behavior contagion and further investigate it with respect to "Chowkidar Movement" in detail. | {
"paragraphs": [
[
"User profiles on social media platforms serve as a virtual introduction of the users. People often maintain their online profile space to reflect their likes and values. Further, how users maintain their profile helps them develop relationship with coveted audience BIBREF0. A user profile on Twitter is composed of several attributes with some of the most prominent ones being the profile name, screen name, profile image, location, description, followers count, and friend count. While the screen name, display name, profile image, and description identify the user, the follower and friend counts represent the user's social connectivity. Profile changes might represent identity choice at a small level. However, previous studies have shown that on a broader level, profile changes may be an indication of a rise in a social movement BIBREF1, BIBREF2.",
"In this work, we conduct a large-scale study of the profile change behavior of users on Twitter during the 2019 general elections in India. These elections are of interest from the perspective of social media analysis for many reasons. Firstly, the general elections in India were held in 7 phases from 11th April, 2019 to 19th May, 2019. Thus, the elections serve as a rich source of data for social movements and changing political alignments throughout the two months. Secondly, the increase in the Internet user-base and wider adoption of smartphones has made social media outreach an essential part of the broader election campaign. Thus, there exists a considerable volume of political discourse data that can be mined to gain interesting insights. Twitter was widely used for political discourse even during the 2014 general elections BIBREF3 and the number was bound to increase this year. A piece of evidence in favor of this argument is the enormous 49 million followers that Narendra Modi, the Prime Minister of India has on Twitter.",
"We believe profile attributes have significant potential to understand how social media played a vital role in the election campaign by political parties as well as the supporters during the #LokSabhaElections2019. A prominent example of the use of profile changes for political campaigning is the #MainBhiChowkidar campaign launched by the Bhartiya Janta Party (BJP) during the #LokSabhaElections2019. In Figure FIGREF2, we show the first occurrence of #MainBhiChowkidar campaign on Twitter which was launched in response to an opposition campaign called “Chowkidar chor hai” (The watchman is a thief). On 17th March 2019, the Indian Prime Minister Narendra Modi stepped up this campaign and added Chowkidar (Watchman) to his display name. This spurred off a social movement on Twitter with users from across the nation and political spectrum updating their profiles on Twitter by prefixing “Chowkidar” to their display name BIBREF4. We are thus interested in studying the different facets of this campaign, in particular, the name changes in the display name of accounts, verified, and others.",
"In Figure FIGREF3, we compare the Twitter profiles of the two leaders of national parties during Lok Sabha Election, 2019. The top profile belongs to @narendramodi (Narendra Modi), PM of India and member of the ruling party Bharatiya Janata Party (BJP), while the bottom profile belongs to @RahulGandhi (Rahul Gandhi), president of Indian National Congress (INC). Figure FIGREF3 shows a snapshot of both leaders before, during, and after the Lok Sabha Elections in 2019. The ruling leader showed brevity in the use of his profile information for political campaigning and changed his profile information during and after the elections, which was not the case with opposition.",
"Intending to understand the political discourse from the lens of profile attributes on social media, we address the following questions."
],
[
"To analyze the importance of user-profiles in elections, we need to distinguish between the profile change behavior of political accounts and follower accounts. This brings us to the first research question:",
"RQ1 [Comparison]: How often do political accounts and common user accounts change their profile information and which profile attributes are changed the most.",
"The political inclination of Twitter users can be classified based on the content of their past tweets and community detection methods BIBREF5. However, the same analysis has not been extended to the study of user-profiles in detail. We thus, study the following:",
"RQ2 [Political Inclination]: Do users reveal their political inclination on Twitter directly/indirectly using their user profiles only?",
"Behavior contagion is a phenomenon where individuals adopt a behavior if one of their opinion leaders adopts it BIBREF6. We analyze the profile change behavior of accounts based on the “Chowkidar” movement with the intent to find behavior contagion of an opinion leader. Thus, our third research question is:",
"RQ3 [Behavior Contagion]: Can profile change behavior be catered to behavior contagion effect where followers tend to adopt an opinion as their opinion leader adopts it?",
"We are interested in knowing the common attributes of the people who took part in the chowkidar movement. To characterize these users, we need to analyze the major topic patterns that these users engage in. Thus, we ask the question:",
"RQ4 [Topic Modelling]: What are the most discussed topics amongst the users that were part of the Chowkidar campaign?"
],
[
"In summary, our main contributions are:",
"We collect daily and weekly profile snapshots of $1,293$ political handles on Twitter and over 55 million of their followers respectively over a period of two months during and after the #LokSabhaElections2019. We will make the data and code available on our website.",
"We analyze the political and follower handles for profile changes over the snapshots and show that the political handles engage in more profile change behavior. We also show that certain profile attributes are changed more frequently than others with subtle differences in the behavior of political and follower accounts.",
"We show users display support for a political party on Twitter through both organic means like party name mentions on their profile, and inorganic means such as showing support for a movement like #MainBhiChowkidar. We analyze the Chowkidar movement in detail and demonstrate that it illustrates a contagion behavior.",
"We perform topic modelling of the users who took part in the Chowkidar movement and show that they were most likely Narendra Modi supporters and mostly discussed political topics on Twitter."
],
[
"Our data collection process is divided into two stages. We first manually curate a set of political accounts on Twitter. We use this set of users as a seed set and then collect the profile information of all the followers of these handles.",
"Seed set: To answer the RQ1, we first manually curate a set of $2,577$ Twitter handles of politicians and official/unofficial party handles involved in Lok Sabha Elections 2019. Let this set of users be called $\\mathbf {U}$. Of the $2,577$ curated handles, only $1,293$ were verified at the beginning of April. Let this set of verified handles be called $\\mathbf {P}$. We collect a snapshot of the profile information of $\\mathbf {P}$ daily for two months from 5th April - 5th June 2019.",
"Tracked set: We use the set $P$ of verified Twitter handles as a seed set and collect the profile information of all their followers. We hypothesize that the users who follow one or more political handles are more likely to express their political orientation on Twitter. Thus, we consider only the followers of all the political handles in our work. The total number of followers of all the handles in set $\\mathbf {P}$ are 600 million, of which only $55,830,844$ users are unique. Owing to Twitter API constraints, we collect snapshots of user-profiles for these handles only once a week over the two months of April 5 - June 5, 2019. There were a total of 9 snapshots, of which 7 coincided with the phases of the election. The exact dates on which the snapshots were taken has been mentioned in Table TABREF10. As we collect snapshots over the of two months, the number of followers of political handles in set $P$ can increase significantly. Similarly, a small subset of followers of handles in $P$ could also have been suspended/deleted. To maintain consistency in our analysis, we only use those handles for our analysis that are present in all the 9 snapshots. We call this set of users as $\\mathbf {S}$. The set $\\mathbf {S}$ consists of 49,433,640 unique accounts.",
"Dataset: Our dataset consists of daily and weekly snapshots of profile information of users in set $\\mathbf {S}$ and $\\mathbf {P}$ respectively. We utilize the Twitter API to collect the profile information of users in both the above sets. Twitter API returns a user object that consists of a total of 34 attributes like name, screen name, description, profile image, location, and followers. We further pre-process the dataset to remove all the handles that were inactive for a long time before proceeding with further analysis.",
"In set $\\mathbf {S}$, more than 15 million out of the total 49 million users have made no tweets and have marked no favourites as well. We call the remaining 34 million users as Active users. Of the Active users, only 5 million users have made a tweet in year 2019. The distribution of the last tweet time of all users in set $S$ is shown in Figure FIGREF14. The 5 million twitter users since the start of 2019 fall in favor of the argument that a minority of users are only responsible for most of the political opinions BIBREF7. We believe there is a correlation between Active users and users who make changes in profile attribute. We, therefore use the set of 34 million users(Active Users of set $\\mathbf {S}$ ) for our further analysis."
],
[
"In this section we characterize the profile change behavior of political accounts (Set $P$) and follower accounts (Set $S$)."
],
[
"Increase in the Internet user-base and widespread adoption of smartphones in the country has led to an increase in the importance of social media campaigns during the elections. This is evident by the fact that in our dataset (Set $U$), we found that a total of 54 handles got verified throughout 61 snapshots. Moreover the average number of followers of handles in set $P$ increased from $3,08,716.84$ to $3,19,881.09$ over the 61 snapshots. Given that the average followers of set $P$ increased by approximately $3\\%$ and 54 profile handles got verified in the span of 61 days, we believe that efforts of leaders on social media campaign can't be denied.",
"We consider 5 major profile attributes for further analysis. These attributes are username, display name, profile image, location and description respectively. The username is a unique handle or screen name associated with each twitter user that appears on the profile URL and is used to communicate with each other on Twitter. Some username examples are $@narendramodi$, $@RahulGandhi$ etc. The display name on the other hand, is merely a personal identifier that is displayed on the profile page of a user, e.g. `Narendra Modi', `Rahul Gandhi', etc. The description is a string to describe the account, while the location is user-defined profile location. We considered the 5 profile attributes as stated above since these attributes are key elements of a users identity and saliently define users likes and values BIBREF2.",
"For the set of users in $P$, the total number of changes in these 5 profile attributes over 61 snapshots is 974. Figure FIGREF12 shows the distribution of profile changes for specific attributes over all the 61 snapshots. The most changed attribute among the political handles is the profile image followed by display name. In the same Figure, there is a sharp increase in the number of changes to the display name attribute on 23rd May, 2019. The increase is because @narendramodi removed Chowkidar from the display name of his Twitter handle which resulted in many other political handles doing the same. Another interesting trend is that none of the handles changed their username over all the snapshots. This could be attributed to the fact that Twitter discourages a change of usernames for verified accounts as the verified status might not remain intact after the change BIBREF8."
],
[
"To characterize the profile changing behavior amongst the followers , we study the profile snapshots of all the users in set $S$ over 9 snapshots.",
"Of all the active user accounts, $1,363,499$ users made at least one change to their profile attributes over all the snapshots. Thus, over $3\\%$ of all the active users engaged in some form of profile change activity over the 9 snapshots. Of these $1,363,499$ users, more than $95\\%$ of the users made only 1 change to their profile, and around 275 users made more than 20 changes. On an average, each one of these $1,363,499$ users made about 2 changes to their profiles with a standard deviation of $1.42$.",
"In order to compare the profile change behavior of political accounts and follower accounts, we analyzed the profile changes of set $\\mathbf {P}$ over the 9 snapshots only. For the same 9 snapshots, $54.9\\%$ of all users in set $\\mathbf {P}$ made at least one change to their profile attributes. This is in sharp contrast to the the users of set $\\mathbf {S}$, where only $3\\%$ of users made any changes to their profile attributes. Similarly, users of set $\\mathbf {P}$ made $75.32\\%$ changes to their profiles on an average, which is 25 times more than the set $\\mathbf {S}$.",
"INSIGHT 1: Political handles are more likely to engage in profile changing behavior as compared to their followers.",
"In Figure FIGREF15, we plot the number of changes in given attributes over all the snapshots for the users in set $S$. From this plot, we find that not all the attributes are modified at an equal rate. Profile image and Location are the most changed profile attributes and account for nearly $34\\%$ and $25\\%$ respectively of the total profile changes in our dataset. We analyze the trends in Figure FIGREF12 and find that the political handles do not change their usernames at all. This is in contrast to the trend in Figure FIGREF15 where we see that there are a lot of handles that change their usernames multiple times. The most likely reason for the same is that most of the follower handles are not verified and would not loose their verified status on changing their username.",
"The follower handles also use their usernames to show support to a movement or person and revert back to their original username later. Out of the $79,991$ username changes, $1,711$ users of set ${S}$ went back and forth to the same name which included adding of Election related keywords. Examples of few such cases in our dataset are shown in Table TABREF16.",
"INSIGHT 2: Users do not change all profile attributes equally, and the profile attributes that political and follower handles focus on are different."
],
[
"Change in an attribute in the profile involves replacing an existing attribute with a new one. We analyze the similarity between the current, new display names and usernames used by users in set $\\mathbf {P}$ and $\\mathbf {S}$ respectively. For this analysis, we consider only the users who have changed their username or display name at least once in all 9 snapshots. We use the Longest Common Subsequence (LCS) to compute the similarity between two strings. Thus, for each user, we compute the average LCS score between all possible pairs of unique display names and usernames adopted by them. A high average LCS score indicates that the newer username or display name were not significantly different from the previous ones.",
"The graph in Figure FIGREF18 shows that nearly $80\\%$ of the users in set $S$ made more significant changes to their Display Names as compared to the political handles. (This can be seen by considering a horizontal line on the graph. For any given fraction of users, the average LCS score of set $S$ (Followers) is lesser than set $P$ (Politicians) for approximately $80\\%$ percent of users). For around $10\\%$ of the users in set $P$, the new and old profile attributes were unrelated (LCS score < $0.5$) and for users in set $S$, this value went up to $30\\%$.",
"INSIGHT 3: Political handles tend to make new changes related to previous attribute values. However, the followers make comparatively less related changes to previous attribute values."
],
[
"In this section, we first describe the phenomenon of mention of political parties names in the profile attributes of users. This is followed by the analysis of profiles that make specific mentions of political handles in their profile attributes. Both of these constitute an organic way of showing support to a party and does not involve any direct campaigning by the parties. We also study in detail the #MainBhiChowkidar campaign and analyze the corresponding change in profile attributes associated with it."
],
[
"To sustain our hypothesis of party mention being an organic behavior, we analyzed party name mentions in the profiles of users in set $\\mathbf {S}$. For this analysis, we focused on two of the major national parties in India, namely Indian National Congress (INC) and Bhartiya Janta Party (BJP). We specifically search for mentions of the party names and a few of its variants in the display name, username and description of the user profiles. We find that $50,432$ users have mentioned “BJP” (or its variants) on their description in the first snapshot of our data itself. This constitutes approximately $1\\%$ of the total number of active users in our dataset, implying a substantial presence of the party on Twitter. Moreover, $2,638$ users added “BJP” (or its variants) to their description over the course of data collection and $1,638$ users removed “BJP” from their description. Table TABREF20 shows the number of mentions of both parties in the different profile attributes."
],
[
"We observe that a lot of users in our dataset wrote that they are proud to be followed by a famous political handle in their description. We show example of such cases in Table TABREF22.",
"We perform an exhaustive search of all such mentions in the description attribute of the users and find $1,164$ instances of this phenomenon. This analysis and the party mention analysis in Section SECREF19, both testify that people often display their political inclination on Twitter via their profile attributes itself.",
"INSIGHT 4: Twitter users often display their political inclinations in their profile attributes itself and are pretty open about it."
],
[
"As discussed is Section SECREF1, @narendramodi added “Chowkidar” to his display name in response to an opposition campaign called “Chowkidar chor hai”. In a coordinated campaign, several other leaders and ministers of BJP also changed their Twitter profiles by adding the prefix Chowkidar to their display names. The movement, however, did not remain confined amongst the members of the party themselves and soon, several Twitter users updated their profiles as well. We opine that the whole campaign of adding Chowkidar to the profile attributes show an inorganic behavior, with political leaders acting as the catalyst. An interesting aspect of this campaign was the fact that the users used several different variants of the word Chowkidar while adding it to their Twitter profiles. Some of the most common variants of Chowkidar that were present in our dataset along with its frequency of use is shown in Figure FIGREF24.",
"We utilize a regular expression query to exhaustively search all the variants of the word Chowkidar in our dataset. We found $2,60,607$ instances of users in set $S$ added Chowkidar (or its variants) to their display names. These $2,60,607$ instances comprised a total of 241 unique chowkidar variants of which 225 have been used lesser than 500 times. We also perform the same analysis for the other profile attributes like description and username as well. The number of users who added Chowkidar to their description and username are $14,409$ and $12,651$ respectively. The union and intersection of all these users are $270,945$ and 727 respectively, implying that most users added Chowkidar to only one of their profile attributes.",
"We believe, the effect of changing the profile attribute in accordance with Prime Minister's campaign is an example of inorganic behavior contagion BIBREF6, BIBREF9. The authors in BIBREF6 argue that opinion diffuses easily in a network if it comes from opinion leaders who are considered to be users with a very high number of followers. We see a similar behavior contagion in our dataset with respect to the Chowkidar movement.",
"We analyze the similarity of the display names with respect to the behavior contagion of Chowkidar movement. In the CDF plot of Figure FIGREF18, a significant spike is observed in the region of LCS values between $0.6$-$0.8$. This spike is caused mostly due to the political handles, who added Chowkidar to their display names which accounted for $95.7\\%$ of the users in this region. In set $P$, a total of 373 people added the specific keyword Chowkidar to their display names of which 315 lie in the normalized LCS range of $0.6$ to $0.8$. We perform similar analysis on the users of set $S$ and find that around $57\\%$ of the users within the region of $0.6$-$0.8$ LCS added specific keyword Chowkidar to their display names. We perform similar analysis on other campaign keywords like Ab Hoga Nyay as well but none of them have the same level of influence or popularity like that of the “Chowkidar” and hence we call it as the “Chowkidar Movement”.",
"On 23rd May, Narendra Modi removed Chowkidar from his Twitter profile and urged others to do the same as shown in Figure FIGREF25. While most users followed Narendra Modi's instructions and removed Chowkidar from their profiles, some users still continued to add chowkidar to their names. The weekly addition and removal of Chowkidar from user profiles is presented in detail in Figure FIGREF26. Thus, the behavior contagion is evident from the fact that after Narendra Modi removed Chowkidar on 23rd May, majority of the population in set $\\mathbf {S}$ removed it too.",
"INSIGHT 5: The Chowkidar movement was very widespread and illustrated contagion behavior."
],
[
"A large number of people were part of the Chowkidar movement. Hence, it is important to gauge their level of interest/support to the party in order to accurately judge the impact of the campaign. We thus perform topic modelling of the description attribute of these users. We utilize the LDA Mallet model BIBREF10 for topic modelling after performing the relevant preprocessing steps on the data. We set the number of topics to 10 as it gave the highest coherence score in our experiments. The 10 topics and the 5 most important keywords belonging to these topics are shown below in Figure FIGREF27. These topics reveal several interesting insights. The most discussed topic amongst these users involves Narendra Modi and includes words like “namo” (slang for Narendra Modi), supporter, nationalist and “bhakt” (devout believer). A few other topics also involve politics like topic 2 and topic 5. Topic 2, in particular contains the words chowkidar, student, and study, indicating that a significant set of users who were part of this campaign were actually students. Topic number 0 accounts for more than $40\\%$ of the total descriptions, which essentially means that most of the people involved in this movement were supporting BJP or Narendra Modi. We also plot the topics associated with each user's description on a 2D plane by performing tSNE BIBREF11 on the 10 dimensional data to ascertain the variation in the type of topics. The topics are pretty distinct and well separated on the 2D plane as can be seen in Figure FIGREF30.",
"INSIGHT 6: The users who were part of the chowkidar movement were mostly Narendra Modi supporters and engaged in political topics"
],
[
"Social Media profiles are a way to present oneself and to cultivate relationship with like minded or desired audience. BIBREF0, BIBREF12. With change in time and interest, users on Twitter change their profile names, description, screen names, location or language BIBREF2. Some of these changes are organic BIBREF13, BIBREF2, while other changes can be triggered by certain events BIBREF14, BIBREF15. In this section, we look into the previous literature which are related to user profile attribute's change behavior. We further look into how Twitter has been used to initiate or sustain movements BIBREF14, BIBREF15, BIBREF1. Finally we investigate how the contagion behavior adaptation spread through opinion leaders BIBREF6, BIBREF9 and conclude with our work's contribution to the literature."
],
[
"Previous studies have focused on change in username attribute extensively, as usernames in Twitter helps in linkability, mentions and becomes part of the profile's URL BIBREF16, BIBREF13, BIBREF17. While the focus of few works is to understand the reason the users change their profile names BIBREF16, BIBREF2, others strongly argue username change is a bad idea BIBREF17, BIBREF13. Jain et al. argue that the main reason people opt to change their username is to leverage more tweet space. Similarly, the work by Mariconti et al. suggests that the change in username may be a result of name squatting, where a person may take away the username of a popular handle for malicious reasons. More recent studies in the area have focused on several profile attributes changes BIBREF2. They suggest that profile attributes can be used to represent oneself at micro-level as well as represent social movement at macro-level."
],
[
"The role of Social Media during social movements have been studied very extensively BIBREF14, BIBREF18, BIBREF19, BIBREF15. With the help of Social Media, participation and ease to express oneself becomes easy BIBREF1. There have been many studies where Twitter has been used to gather the insight of online users in the form of tweets BIBREF1, BIBREF14, BIBREF15. The authors in BIBREF14 study the tweets to understand how people reacted to decriminalization of LGBT in India. The study of Gezi Park protest shows how the protest took a political turn, as people changed their usernames to show support to their political leaders BIBREF1.",
"The success of the party “Alternative for Deutschland(AFD)” in German Federal Elections, 2017 can be catered to its large online presence BIBREF18. The authors in BIBREF18 study the tweets and retweets networks during the German elections and analyze how users are organized into communities and how information flows between them. The role of influential leaders during elections has also been studied in detail in studies like BIBREF20. This brings us to the study of how influential leaders contribute to spread and adaptation of events."
],
[
"The real and virtual world are intertwined in a way that any speech, event, or action of leaders may trigger the changes is behavior of people online BIBREF1, BIBREF21. Opinion leaders have been defined as `the individuals who were likely to influence other persons in their immediate environment.' BIBREF22 Opinion leaders on social media platforms help in adoption of behavior by followers easily BIBREF9, BIBREF6. The performance and effectiveness of the opinion leader directly affect the contagion of the leader BIBREF9. On Twitter, opinion leaders show high motivation for information seeking and public expression. Moreover, they make a significant contribution to people's political involvement BIBREF21.",
"Our work contributes to this literature in two major ways. First, we investigate how profile attribute changes are similar or different from the influential leaders in the country and how much of these changes are the result of the same. Then we see how the users change their profile information in the midst of election in the largest democracy in the world."
],
[
"In this paper, we study the use of profile attributes on Twitter as the major endorsement during #LokSabhaElectios2019. We identify $1,293$ verified political leader's profile and extract $49,433,640$ unique followers. We collect the daily and weekly snapshots of the political leaders and the follower handles from 5th April, 2019 to 5th June, 2019. With the constraint of Twitter API, we collected a total of 9 snapshots of the follower's profile and 61 snapshots of Political handles. We consider the 5 major attributes for the analysis, including, Profile Name, Username, Description, Profile Image, and Location.",
"Firstly, we analyze the most changed attributes among the 5 major attributes. We find that most of the users made at-most 4 changes during the election campaign, with profile image being the most changed attribute for both the political as well as follower handles. We also show that political handles made more changes to profile attributes than the follower handles. About $80\\%$ changes made by the followers in the period of data collection was related to #LokSabhaElectios2019. While the profile handles made changes, the changes mostly included going back and forth to same profile attribute, with the change in attribute being active support to a political party or political leader.",
"We argue that we can predict the political inclination of a user using just the profile attribute of the users. We further show that the presence of party name in the profile attribute can be considered as an organic behavior and signals support to a party. However, we argue that the addition of election campaign related keywords to the profile is a form of inorganic behavior. The inorganic behavior analysis falls inline with behavior contagion, where the followers tend to adapt to the behavior of their opinion leaders. The “Chowkidar movement” showed a similar effect in the #LokSabhaElectios2019, which was evident by how the other political leaders and followers of BJP party added chowkidar to their profile attributes after @narendramodi did. We thus, argue that people don't shy away from showing support to political parties through profile information.",
"We believe our analysis will be helpful to understand how social movement like election are perceived and sustained through profile attributes on social media. Our findings provide baseline data for study in the direction of election campaign and to further analyze people's perspective on elections."
]
],
"section_name": [
"Introduction",
"Introduction ::: Research Questions",
"Introduction ::: Contributions",
"Dataset Collection and Description",
"Profile Attribute Analysis",
"Profile Attribute Analysis ::: Political Handles",
"Profile Attribute Analysis ::: Follower Handles",
"Profile Attribute Analysis ::: Similarity analysis",
"Political support through profile meta data",
"Political support through profile meta data ::: Party mention in the profile attributes",
"Political support through profile meta data ::: Who follows who",
"Political support through profile meta data ::: The Chowkidar movement",
"Topic modelling of users",
"Related Work",
"Related Work ::: Profile attribute change behavior",
"Related Work ::: Social Media for online movements",
"Related Work ::: Adoption of behavior from leaders",
"Discussion"
]
} | {
"answers": [
{
"annotation_id": [
"2b958f19cb89ac620fc2e78a4158f6075423ebbf",
"dfc8cae06b095dfdecae134577d00e456fe79a84"
],
"answer": [
{
"evidence": [
"We consider 5 major profile attributes for further analysis. These attributes are username, display name, profile image, location and description respectively. The username is a unique handle or screen name associated with each twitter user that appears on the profile URL and is used to communicate with each other on Twitter. Some username examples are $@narendramodi$, $@RahulGandhi$ etc. The display name on the other hand, is merely a personal identifier that is displayed on the profile page of a user, e.g. `Narendra Modi', `Rahul Gandhi', etc. The description is a string to describe the account, while the location is user-defined profile location. We considered the 5 profile attributes as stated above since these attributes are key elements of a users identity and saliently define users likes and values BIBREF2."
],
"extractive_spans": [
"username",
"display name",
"profile image",
"location",
"description"
],
"free_form_answer": "",
"highlighted_evidence": [
"We consider 5 major profile attributes for further analysis. These attributes are username, display name, profile image, location and description respectively."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We consider 5 major profile attributes for further analysis. These attributes are username, display name, profile image, location and description respectively. The username is a unique handle or screen name associated with each twitter user that appears on the profile URL and is used to communicate with each other on Twitter. Some username examples are $@narendramodi$, $@RahulGandhi$ etc. The display name on the other hand, is merely a personal identifier that is displayed on the profile page of a user, e.g. `Narendra Modi', `Rahul Gandhi', etc. The description is a string to describe the account, while the location is user-defined profile location. We considered the 5 profile attributes as stated above since these attributes are key elements of a users identity and saliently define users likes and values BIBREF2."
],
"extractive_spans": [
"username, display name, profile image, location and description"
],
"free_form_answer": "",
"highlighted_evidence": [
"We consider 5 major profile attributes for further analysis. These attributes are username, display name, profile image, location and description respectively. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"194408635dae9ea3dd70772d75fc2014c59aaa7b",
"d3a8873be17ec5023776a77408e67c27358bad8f"
],
"answer": [
{
"evidence": [
"In this section, we first describe the phenomenon of mention of political parties names in the profile attributes of users. This is followed by the analysis of profiles that make specific mentions of political handles in their profile attributes. Both of these constitute an organic way of showing support to a party and does not involve any direct campaigning by the parties. We also study in detail the #MainBhiChowkidar campaign and analyze the corresponding change in profile attributes associated with it.",
"As discussed is Section SECREF1, @narendramodi added “Chowkidar” to his display name in response to an opposition campaign called “Chowkidar chor hai”. In a coordinated campaign, several other leaders and ministers of BJP also changed their Twitter profiles by adding the prefix Chowkidar to their display names. The movement, however, did not remain confined amongst the members of the party themselves and soon, several Twitter users updated their profiles as well. We opine that the whole campaign of adding Chowkidar to the profile attributes show an inorganic behavior, with political leaders acting as the catalyst. An interesting aspect of this campaign was the fact that the users used several different variants of the word Chowkidar while adding it to their Twitter profiles. Some of the most common variants of Chowkidar that were present in our dataset along with its frequency of use is shown in Figure FIGREF24.",
"We believe, the effect of changing the profile attribute in accordance with Prime Minister's campaign is an example of inorganic behavior contagion BIBREF6, BIBREF9. The authors in BIBREF6 argue that opinion diffuses easily in a network if it comes from opinion leaders who are considered to be users with a very high number of followers. We see a similar behavior contagion in our dataset with respect to the Chowkidar movement.",
"We argue that we can predict the political inclination of a user using just the profile attribute of the users. We further show that the presence of party name in the profile attribute can be considered as an organic behavior and signals support to a party. However, we argue that the addition of election campaign related keywords to the profile is a form of inorganic behavior. The inorganic behavior analysis falls inline with behavior contagion, where the followers tend to adapt to the behavior of their opinion leaders. The “Chowkidar movement” showed a similar effect in the #LokSabhaElectios2019, which was evident by how the other political leaders and followers of BJP party added chowkidar to their profile attributes after @narendramodi did. We thus, argue that people don't shy away from showing support to political parties through profile information."
],
"extractive_spans": [],
"free_form_answer": "Organic: mention of political parties names in the profile attributes, specific mentions of political handles in the profile attributes.\nInorganic: adding Chowkidar to the profile attributes, the effect of changing the profile attribute in accordance with Prime Minister's campaign, the addition of election campaign related keywords to the profile.",
"highlighted_evidence": [
"n this section, we first describe the phenomenon of mention of political parties names in the profile attributes of users. This is followed by the analysis of profiles that make specific mentions of political handles in their profile attributes. Both of these constitute an organic way of showing support to a party and does not involve any direct campaigning by the parties",
"We opine that the whole campaign of adding Chowkidar to the profile attributes show an inorganic behavior, with political leaders acting as the catalyst.",
"We believe, the effect of changing the profile attribute in accordance with Prime Minister's campaign is an example of inorganic behavior contagion BIBREF6, BIBREF9. ",
" However, we argue that the addition of election campaign related keywords to the profile is a form of inorganic behavior. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this section, we first describe the phenomenon of mention of political parties names in the profile attributes of users. This is followed by the analysis of profiles that make specific mentions of political handles in their profile attributes. Both of these constitute an organic way of showing support to a party and does not involve any direct campaigning by the parties. We also study in detail the #MainBhiChowkidar campaign and analyze the corresponding change in profile attributes associated with it.",
"As discussed is Section SECREF1, @narendramodi added “Chowkidar” to his display name in response to an opposition campaign called “Chowkidar chor hai”. In a coordinated campaign, several other leaders and ministers of BJP also changed their Twitter profiles by adding the prefix Chowkidar to their display names. The movement, however, did not remain confined amongst the members of the party themselves and soon, several Twitter users updated their profiles as well. We opine that the whole campaign of adding Chowkidar to the profile attributes show an inorganic behavior, with political leaders acting as the catalyst. An interesting aspect of this campaign was the fact that the users used several different variants of the word Chowkidar while adding it to their Twitter profiles. Some of the most common variants of Chowkidar that were present in our dataset along with its frequency of use is shown in Figure FIGREF24."
],
"extractive_spans": [],
"free_form_answer": "Mentioning of political parties names and political twitter handles is the organic way to show political affiliation; adding Chowkidar or its variants to the profile is the inorganic way.",
"highlighted_evidence": [
"In this section, we first describe the phenomenon of mention of political parties names in the profile attributes of users. This is followed by the analysis of profiles that make specific mentions of political handles in their profile attributes. Both of these constitute an organic way of showing support to a party and does not involve any direct campaigning by the parties.",
"We opine that the whole campaign of adding Chowkidar to the profile attributes show an inorganic behavior, with political leaders acting as the catalyst. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"82aa603c8fa201f2f08a81f38be2a4cccbc928a9"
],
"answer": [
{
"evidence": [
"INSIGHT 1: Political handles are more likely to engage in profile changing behavior as compared to their followers.",
"In Figure FIGREF15, we plot the number of changes in given attributes over all the snapshots for the users in set $S$. From this plot, we find that not all the attributes are modified at an equal rate. Profile image and Location are the most changed profile attributes and account for nearly $34\\%$ and $25\\%$ respectively of the total profile changes in our dataset. We analyze the trends in Figure FIGREF12 and find that the political handles do not change their usernames at all. This is in contrast to the trend in Figure FIGREF15 where we see that there are a lot of handles that change their usernames multiple times. The most likely reason for the same is that most of the follower handles are not verified and would not loose their verified status on changing their username.",
"INSIGHT 3: Political handles tend to make new changes related to previous attribute values. However, the followers make comparatively less related changes to previous attribute values."
],
"extractive_spans": [],
"free_form_answer": "Influential leaders are more likely to change their profile attributes than their followers; the leaders do not change their usernames, while their followers change their usernames a lot; the leaders tend to make new changes related to previous attribute values, while the followers make comparatively less related changes to previous attribute values.",
"highlighted_evidence": [
"INSIGHT 1: Political handles are more likely to engage in profile changing behavior as compared to their followers.",
" We analyze the trends in Figure FIGREF12 and find that the political handles do not change their usernames at all. This is in contrast to the trend in Figure FIGREF15 where we see that there are a lot of handles that change their usernames multiple times. ",
"INSIGHT 3: Political handles tend to make new changes related to previous attribute values. However, the followers make comparatively less related changes to previous attribute values."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What profile metadata is used for this analysis?",
"What are the organic and inorganic ways to show political affiliation through profile changes?",
"How do profile changes vary for influential leads and their followers over the social movement?"
],
"question_id": [
"e374169ee10f835f660ab8403a5701114586f167",
"82595ca5d11e541ed0c3353b41e8698af40a479b",
"d4db7df65aa4ece63e1de813e5ce98ce1b4dbe7f"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 2: Change in profile attributes of Narendra Modi vs Rahul Gandhi on Twitter. @narendramodi changes his profile attributes frequently as compared to@RahulGandhi on Twitter.",
"Table 1: Brief description of the data collection. U : seed set of Political handles, P : set of verified Political handles, S : set of unique Follower handles.",
"Figure 3: Distribution of profile changes for 5 attributes over 61 snapshots of set P . Profile image is the most changed attribute across the snapshots. A sharp increase in display name coincide with 23rd May, 2019 when @narendramodi removed Chowkidar from the display name.",
"Figure 5: Distribution of frequency of change for 5 attributes over the period of data collection as shown in Table 1 for set S .",
"Figure 4: Distribution of last tweet time of twitterer in set S . Only 5 million users tweeted since start of 2019, in favor of hypothesis that a minority of users are only responsible for most of the political opinions.",
"Table 2: Example of users who changed their username once and later reverted back to their old username.",
"Table 3: List for frequency of presence, addition and removal of party names to given profile attributes in the dataset.",
"Figure 6: Distribution of Display name change among set S and P . Higher Longest Comman Subsequence(LCS) score indicates the old andnewDisplay names were similar or related.",
"Table 4: Example of users who wrote followed by a famous politician in their description",
"Figure 7: Most popular variants of Chowkidar and their corresponding frequency of use.",
"Figure 8: The tweet in which Narendra Modi urges Twitter users to remove Chowkidar from their profiles.",
"Figure 9:Weekly addition and deletion of Chowkidar to profile attributes.",
"Figure 10: The most discussed topics in a descending order along with the 5most representative words of these topics.",
"Figure 11: Clusters of the topics on the 2D plane after performing tSNE on the data."
],
"file": [
"2-Figure2-1.png",
"3-Table1-1.png",
"4-Figure3-1.png",
"4-Figure5-1.png",
"4-Figure4-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Figure6-1.png",
"5-Table4-1.png",
"6-Figure7-1.png",
"6-Figure8-1.png",
"7-Figure9-1.png",
"7-Figure10-1.png",
"8-Figure11-1.png"
]
} | [
"What are the organic and inorganic ways to show political affiliation through profile changes?",
"How do profile changes vary for influential leads and their followers over the social movement?"
] | [
[
"1909.10012-Discussion-2",
"1909.10012-Political support through profile meta data ::: The Chowkidar movement-0",
"1909.10012-Political support through profile meta data ::: The Chowkidar movement-2",
"1909.10012-Political support through profile meta data-0"
],
[
"1909.10012-Profile Attribute Analysis ::: Similarity analysis-2",
"1909.10012-Profile Attribute Analysis ::: Follower Handles-4",
"1909.10012-Profile Attribute Analysis ::: Follower Handles-3"
]
] | [
"Mentioning of political parties names and political twitter handles is the organic way to show political affiliation; adding Chowkidar or its variants to the profile is the inorganic way.",
"Influential leaders are more likely to change their profile attributes than their followers; the leaders do not change their usernames, while their followers change their usernames a lot; the leaders tend to make new changes related to previous attribute values, while the followers make comparatively less related changes to previous attribute values."
] | 188 |
1703.07090 | Deep LSTM for Large Vocabulary Continuous Speech Recognition | Recurrent neural networks (RNNs), especially long short-term memory (LSTM) RNNs, are effective networks for sequential tasks such as speech recognition. Deeper LSTM models perform well on large vocabulary continuous speech recognition because of their impressive learning ability. However, a deeper network is also more difficult to train. We introduce a training framework with layer-wise training and exponential moving average methods for deeper LSTM models. It is a competitive framework in which LSTM models of more than 7 layers are successfully trained on Shenma voice search data in Mandarin, and they outperform deep LSTM models trained by the conventional approach. Moreover, for online streaming speech recognition applications, a shallow model with a low real time factor is distilled from the very deep model. The recognition accuracy suffers little loss in the distillation process. As a result, the model trained with the proposed framework reduces the character error rate by a relative 14\%, compared to the original model with similar real-time capability. Furthermore, a novel transfer learning strategy with segmental Minimum Bayes-Risk is also introduced in the framework. This strategy makes it possible for training with only a small part of the dataset to outperform full-dataset training from the beginning. | {
"paragraphs": [
[
"Recently, deep neural network has been widely employed in various recognition tasks. Increasing the depth of neural network is a effective way to improve the performance, and convolutional neural network (CNN) has benefited from it in visual recognition task BIBREF0 . Deeper long short-term memory (LSTM) recurrent neural networks (RNNs) are also applied in large vocabulary continuous speech recognition (LVCSR) task, because LSTM networks have shown better performance than Fully-connected feed-forward deep neural network BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 .",
"Training neural network becomes more challenge when it goes deep. A conceptual tool called linear classifier probe is introduced to better understand the dynamics inside a neural network BIBREF5 . The discriminating features of linear classifier is the hidden units of a intermediate layer. For deep neural networks, it is observed that deeper layer's accuracy is lower than that of shallower layers. Therefore, the tool shows the difficulty of deep neural model training visually.",
"Layer-wise pre-training is a successful method to train very deep neural networks BIBREF6 . The convergence becomes harder with increasing the number of layers, even though the model is initialized with Xavier or its variants BIBREF7 , BIBREF8 . But the deeper network which is initialized with a shallower trained network could converge well.",
"The size of LVCSR training dataset goes larger and training with only one GPU becomes high time consumption inevitably. Therefore, parallel training with multi-GPUs is more suitable for LVCSR system. Mini-batch based stochastic gradient descent (SGD) is the most popular method in neural network training procedure. Asynchronous SGD is a successful effort for parallel training based on it BIBREF9 , BIBREF10 . It can many times speed up the training time without decreasing the accuracy. Besides, synchronous SGD is another effective effort, where the parameter server waits for every works to finish their computation and sent their local models to it, and then it sends updated model back to all workers BIBREF11 . Synchronous SGD converges well in parallel training with data parallelism, and is also easy to implement.",
"In order to further improve the performance of deep neural network with parallel training, several methods are proposed. Model averaging method achieves linear speedup, as the final model is averaged from all parameters of local models in different workers BIBREF12 , BIBREF13 , but the accuracy decreases compared with single GPU training. Moreover, blockwise model-updating filter (BMUF) provides another almost linear speedup approach with multi-GPUs on the basis of model averaging. It can achieve improvement or no-degradation of recognition performance compared with mini-batch SGD on single GPU BIBREF14 .",
"Moving averaged (MA) approaches are also proposed for parallel training. It is demonstrated that the moving average of the parameters obtained by SGD performs as well as the parameters that minimize the empirical cost, and moving average parameters can be used as the estimator of them, if the size of training data is large enough BIBREF15 . One pass learning is then proposed, which is the combination of learning rate schedule and averaged SGD using moving average BIBREF16 . Exponential moving average (EMA) is proposed as a non-interference method BIBREF17 . EMA model is not broadcasted to workers to update their local models, and it is applied as the final model of entire training process. EMA method is utilized with model averaging and BMUF to further decrease the character error rate (CER). It is also easy to implement in existing parallel training systems.",
"Frame stacking can also speed up the training time BIBREF18 . The super frame is stacked by several regular frames, and it contains the information of them. Thus, the network can see multiple frames at a time, as the super frame is new input. Frame stacking can also lead to faster decoding.",
"For streaming voice search service, it needs to display intermediate recognition results while users are still speaking. As a result, the system needs to fulfill high real-time requirement, and we prefer unidirectional LSTM network rather than bidirectional one. High real-time requirement means low real time factor (RTF), but the RTF of deep LSTM model is higher inevitably. The dilemma of recognition accuracy and real-time requirement is an obstacle to the employment of deep LSTM network. Deep model outperforms because it contains more knowledge, but it is also cumbersome. As a result, the knowledge of deep model can be distilled to a shallow model BIBREF19 . It provided a effective way to employ the deep model to the real-time system.",
"In this paper, we explore a entire deep LSTM RNN training framework, and employ it to real-time application. The deep learning systems benefit highly from a large quantity of labeled training data. Our first and basic speech recognition system is trained on 17000 hours of Shenma voice search dataset. It is a generic dataset sampled from diverse aspects of search queries. The requirement of speech recognition system also addressed by specific scenario, such as map and navigation task. The labeled dataset is too expensive, and training a new model with new large dataset from the beginning costs lots of time. Thus, it is natural to think of transferring the knowledge from basic model to new scenario's model. Transfer learning expends less data and less training time than full training. In this paper, we also introduce a novel transfer learning strategy with segmental Minimum Bayes-Risk (sMBR). As a result, transfer training with only 1000 hours data can match equivalent performance for full training with 7300 hours data.",
"Our deep LSTM training framework for LVCSR is presented in Section 2. Section 3 describes how the very deep models does apply in real world applications, and how to transfer the model to another task. The framework is analyzed and discussed in Section 4, and followed by the conclusion in Section 5."
],
[
"Gradient-based optimization of deep LSTM network with random initialization get stuck in poor solution easily. Xavier initialization can partially solve this problem BIBREF7 , so this method is the regular initialization method of all training procedure. However, it does not work well when it is utilized to initialize very deep model directly, because of vanishing or exploding gradients. Instead, layer-wise pre-training method is a effective way to train the weights of very deep architecture BIBREF6 , BIBREF20 . In layer-wise pre-training procedure, a one-layer LSTM model is firstly trained with normalized initialization. Sequentially, two-layers LSTM model's first layer is initialized by trained one-layer model, and its second layer is regularly initialized. In this way, a deep architecture is layer-by-layer trained, and it can converge well.",
"In conventional layer-wise pre-training, only parameters of shallower network are transfered to deeper one, and the learning targets are still the alignments generated by HMM-GMM system. The targets are vectors that only one state's probability is one, and the others' are zeros. They are known as hard targets, and they carry limited knowledge as only one state is active. In contrast, the knowledge of shallower network should be also transfered to deeper one. It is obtained by the softmax layer of existing model typically, so each state has a probability rather than only zero or one, and called as soft target. As a result, the deeper network which is student network learns the parameters and knowledge from shallower one which is called teacher network. When training the student network from the teacher network, the final alignment is the combination of hard target and soft target in our layer-wise training phase. The final alignment provides various knowledge which transfered from teacher network and extracted from true labels. If only soft target is learned, student network perform no better than teacher network, but it could outperform teacher network as it also learns true labels.",
"The deeper network spends less time to getting the same level of original network than the network trained from the beginning, as a period of low performance is skipped. Therefore, training with hard and soft target is a time saving method. For large training dataset, training with the whole dataset still spends too much time. A network firstly trained with only a small part of dataset could go deeper as well, and so the training time reducing rapidly. When the network is deep enough, it then trained on the entire dataset to get further improvement. There is no gap of accuracy between these two approaches, but latter one saves much time."
],
[
"The objects of conventional saturation check are gradients and the cell activations BIBREF4 . Gradients are clipped to range [-5, 5], while the cell activations clipped to range [-50, 50]. Apart from them, the differentials of recurrent layers is also limited. If the differentials go beyond the range, corresponding back propagation is skipped, while if the gradients and cell activations go beyond the bound, values are set as the boundary values. The differentials which are too large or too small will lead to the gradients easily vanishing, and it demonstrates the failure of this propagation. As a result, the parameters are not updated, and next propagation ."
],
[
"Cross-entropy (CE) is widely used in speech recognition training system as a frame-wise discriminative training criterion. However, it is not well suited to speech recognition, because speech recognition training is a sequential learning problem. In contrast, sequence discriminative training criterion has shown to further improve performance of neural network first trained with cross-entropy BIBREF21 , BIBREF22 , BIBREF23 . We choose state-level minimum bayes risk (sMBR) BIBREF21 among a number of sequence discriminative criterion is proposed, such as maximum mutual information (MMI) BIBREF24 and minimum phone error (MPE) BIBREF25 . MPE and sMBR are designed to minimize the expected error of different granularity of labels, while CE aims to minimizes expected frame error, and MMI aims to minimizes expected sentence error. State-level information is focused on by sMBR.",
"a frame-level accurate model is firstly trained by CE loss function, and then sMBR loss function is utilized for further training to get sequence-level accuracy. Only a part of training dataset is needed in sMBR training phase on the basis of whole dataset CE training."
],
[
"It is demonstrated that training with larger dataset can improve recognition accuracy. However, larger dataset means more training samples and more model parameters. Therefore, parallel training with multiple GPUs is essential, and it makes use of data parallelism BIBREF9 . The entire training data is partitioned into several split without overlapping and they are distributed to different GPUs. Each GPU trains with one split of training dataset locally. All GPUs synchronize their local models with model average method after a mini-batch optimization BIBREF12 , BIBREF13 .",
"Model average method achieves linear speedup in training phase, but the recognition accuracy decreases compared with single GPU training. Block-wise model updating filter (BMUF) is another successful effort in parallel training with linear speedup as well. It can achieve no-degradation of recognition accuracy with multi-GPUs BIBREF14 . In the model average method, aggregated model INLINEFORM0 is computed and broadcasted to GPUs. On the basis of it, BMUF proposed a novel model updating strategy: INLINEFORM1 INLINEFORM2 ",
"Where INLINEFORM0 denotes model update, and INLINEFORM1 is the global-model update. There are two parameters in BMUF, block momentum INLINEFORM2 , and block learning rate INLINEFORM3 . Then, the global model is updated as INLINEFORM4 ",
"Consequently, INLINEFORM0 is broadcasted to all GPUs to initial their local models, instead of INLINEFORM1 in model average method.",
"Averaged SGD is proposed to further accelerate the convergence speed of SGD. Averaged SGD leverages the moving average (MA) INLINEFORM0 as the estimator of INLINEFORM1 BIBREF15 : INLINEFORM2 ",
"Where INLINEFORM0 is computed by model averaging or BMUF. It is shown that INLINEFORM1 can well converge to INLINEFORM2 , with the large enough training dataset in single GPU training. It can be considered as a non-interference strategy that INLINEFORM3 does not participate the main optimization process, and only takes effect after the end of entire optimization. However, for the parallel training implementation, each INLINEFORM4 is computed by model averaging and BMUF with multiple models, and moving average model INLINEFORM5 does not well converge, compared with single GPU training.",
"Model averaging based methods are employed in parallel training of large scale dataset, because of their faster convergence, and especially no-degradation implementation of BMUF. But combination of model averaged based methods and moving average does not match the expectation of further enhance performance and it is presented as INLINEFORM0 ",
"The weight of each INLINEFORM0 is equal in moving average method regardless the effect of temporal order. But INLINEFORM1 closer to the end of training achieve higher accuracy in the model averaging based approach, and thus it should be with more proportion in final INLINEFORM2 . As a result, exponential moving average(EMA) is appropriate, which the weight for each older parameters decrease exponentially, and never reaching zero. After moving average based methods, the EMA parameters are updated recursively as INLINEFORM3 ",
"Here INLINEFORM0 represents the degree of weight decrease, and called exponential updating rate. EMA is also a non-interference training strategy that is implemented easily, as the updated model is not broadcasted. Therefore, there is no need to add extra learning rate updating approach, as it can be appended to existing training procedure directly."
],
[
"There is a high real time requirement in real world application, especially in online voice search system. Shenma voice search is one of the most popular mobile search engines in China, and it is a streaming service that intermediate recognition results displayed while users are still speaking. Unidirectional LSTM network is applied, rather than bidirectional one, because it is well suited to real-time streaming speech recognition."
],
[
"It is demonstrated that deep neural network architecture can achieve improvement in LVCSR. However, it also leads to much more computation and higher RTF, so that the recognition result can not be displayed in real time. It should be noted that deeper neural network contains more knowledge, but it is also cumbersome. the knowledge is key to improve the performance. If it can be transfered from cumbersome model to a small model, the recognition ability can also be transfered to the small model. Knowledge transferring to small model is called distillation BIBREF19 . The small model can perform as well as cumbersome model, after distilling. It provide a way to utilize high-performance but high RTF model in real time system. The class probability produced by the cumbersome model is regarded as soft target, and the generalization ability of cumbersome model is transfered to small model with it. Distillation is model's knowledge transferring approach, so there is no need to use the hard target, which is different with the layer-wise training method.",
"9-layers unidirectional LSTM model achieves outstanding performance, but meanwhile it is too computationally expensive to allow deployment in real time recognition system. In order to ensure real-time of the system, the number of layers needs to be reduced. The shallower network can learn the knowledge of deeper network with distillation. It is found that RTF of 2-layers network is acceptable, so the knowledge is distilled from 9-layers well-trained model to 2-layers model. Table TABREF16 shows that distillation from 9-layers to 2-layers brings RTF decrease of relative 53%, while CER only increases 5%. The knowledge of deep network is almost transfered with distillation, Distillation brings promising RTF reduction, but only little knowledge of deep network is lost. Moreover, CER of 2-layers distilled LSTM decreases relative 14%, compared with 2-layers regular-trained LSTM."
],
[
"For a certain specific scenario, the model trained with the data recorded from it has better adaptation than the model trained with generic scenario. But it spends too much time training a model from the beginning, if there is a well-trained model for generic scenarios. Moreover, labeling a large quantity of training data in new scenario is both costly and time consuming. If a model transfer trained with smaller dataset can obtained the similar recognition accuracy compared with the model directly trained with larger dataset, it is no doubt that transfer learning is more practical. Since specific scenario is a subset of generic scenario, some knowledge can be shared between them. Besides, generic scenario consists of various conditions, so its model has greater robustness. As a result, not only shared knowledge but also robustness can be transfered from the model of generic scenario to the model of specific one.",
"As the model well trained from generic scenario achieves good performance in frame level classification, sequence discriminative training is required to adapt new model to specific scenario additionally. Moreover, it does not need alignment from HMM-GMM system, and it also saves amount of time to prepare alignment."
],
[
"A large quantity of labeled data is needed for training a more accurate acoustic model. We collect the 17000 hours labeled data from Shenma voice search, which is one of the most popular mobile search engines in China. The dataset is created from anonymous online users' search queries in Mandarin, and all audio file's sampling rate is 16kHz, recorded by mobile phones. This dataset consists of many different conditions, such as diverse noise even low signal-to-noise, babble, dialects, accents, hesitation and so on.",
"In the Amap, which is one of the most popular web mapping and navigation services in China, users can search locations and navigate to locations they want though voice search. To present the performance of transfer learning with sequence discriminative training, the model trained from Shenma voice search which is greneric scenario transfer its knowledge to the model of Amap voice search. 7300 hours labeled data is collected in the similar way of Shenma voice search data collection.",
"Two dataset is divided into training set, validation set and test set separately, and the quantity of them is shown in Table TABREF10 . The three sets are split according to speakers, in order to avoid utterances of same speaker appearing in three sets simultaneously. The test sets of Shenma and Amap voice search are called Shenma Test and Amap Test."
],
[
"LSTM RNNs outperform conventional RNNs for speech recognition system, especially deep LSTM RNNs, because of its long-range dependencies more accurately for temporal sequence conditions BIBREF26 , BIBREF23 . Shenma and Amap voice search is a streaming service that intermediate recognition results displayed while users are still speaking. So as for online recognition in real time, we prefer unidirectional LSTM model rather than bidirectional one. Thus, the training system is unidirectional LSTM-based.",
"A 26-dimensional filter bank and 2-dimensional pitch feature is extracted for each frame, and is concatenated with first and second order difference as the final input of the network. The super frame are stacked by 3 frames without overlapping. The architecture we trained consists of two LSTM layers with sigmoid activation function, followed by a full-connection layer. The out layer is a softmax layer with 11088 hidden markov model (HMM) tied-states as output classes, the loss function is cross-entropy (CE). The performance metric of the system in Mandarin is reported with character error rate (CER). The alignment of frame-level ground truth is obtained by GMM-HMM system. Mini-batched SGD is utilized with momentum trick and the network is trained for a total of 4 epochs. The block learning rate and block momentum of BMUF are set as 1 and 0.9. 5-gram language model is leveraged in decoder, and the vocabulary size is as large as 760000. Differentials of recurrent layers is limited to range [-10000,10000], while gradients are clipped to range [-5, 5] and cell activations clipped to range [-50, 50]. After training with CE loss, sMBR loss is employed to further improve the performance.",
"It has shown that BMUF outperforms traditional model averaging method, and it is utilized at the synchronization phase. After synchronizing with BMUF, EMA method further updates the model in non-interference way. The training system is deployed on the MPI-based HPC cluster where 8 GPUs. Each GPU processes non-overlap subset split from the entire large scale dataset in parallel.",
"Local models from distributed workers synchronize with each other in decentralized way. In the traditional model averaging and BMUF method, a parameter server waits for all workers to send their local models, aggregate them, and send the updated model to all workers. Computing resource of workers is wasted until aggregation of the parameter server done. Decentralized method makes full use of computing resource, and we employ the MPI-based Mesh AllReduce method. It is mesh topology as shown in Figure FIGREF12 . There is no centralized parameter server, and peer to peer communication is used to transmit local models between workers. Local model INLINEFORM0 of INLINEFORM1 -th worker in INLINEFORM2 workers cluster is split to INLINEFORM3 pieces INLINEFORM4 , and send to corresponding worker. In the aggregation phase, INLINEFORM5 -th worker computed INLINEFORM6 splits of model INLINEFORM7 and send updated model INLINEFORM8 back to workers. As a result, all workers participate in aggregation and no computing resource is dissipated. It is significant to promote training efficiency, when the size of neural network model is too large. The EMA model is also updated additionally, but not broadcasting it."
],
[
"In order to evaluate our system, several sets of experiments are performed. The Shenma test set including about 9000 samples and Amap test set including about 7000 samples contain various real world conditions. It simulates the majority of user scenarios, and can well evaluates the performance of a trained model. Firstly, we show the results of models trained with EMA method. Secondly, for real world applications, very deep LSTM is distilled to a shallow one, so as for lower RTF. The model of Amap is also needed to train for map and navigation scenarios. The performance of transfer learning from Shenma voice search to Amap voice search is also presented."
],
[
"In layer-wise training, the deeper model learns both parameters and knowledge from the shallower model. The deeper model is initialized by the shallower one, and its alignment is the combination of hard target and soft target of shallower one. Two targets have the same weights in our framework. The teacher model is trained with CE. For each layer-wise trained CE model, corresponding sMBR model is also trained, as sMBR could achieve additional improvement. In our framework, 1000 hours data is randomly selected from the total dataset for sMBR training. There is no obvious performance enhancement when the size of sMBR training dataset increases.",
"For very deep unidirectional LSTM initialized with Xavier initialization algorithm, 6-layers model converges well, but there is no further improvement with increasing the number of layers. Therefore, the first 6 layers of 7-layers model is initialized by 6-layers model, and soft target is provided by 6-layers model. Consequently, deeper LSTM is also trained in the same way. It should be noticed that the teacher model of 9-layers model is the 8-layers model trained by sMBR, while the other teacher model is CE model. As shown in Table TABREF15 , the layer-wise trained models always performs better than the models with Xavier initialization, as the model is deep. Therefore, for the last layer training, we choose 8-layers sMBR model as the teacher model instead of CE model. A comparison between 6-layers and 9-layers sMBR models shows that 3 additional layers of layer-wise training brings relative 12.6% decreasing of CER. It is also significant that the averaged CER of sMBR models with different layers decreases absolute 0.73% approximately compared with CE models, so the improvement of sequence discriminative learning is promising."
],
[
"2-layers distilled model of Shenma voice search has shown a impressive performance on Shenma Test, and we call it Shenma model. It is trained for generic search scenario, but it has less adaptation for specific scenario like Amap voice search. Training with very large dataset using CE loss is regarded as improvement of frame level recognition accuracy, and sMBR with less dataset further improves accuracy as sequence discriminative training. If robust model of generic scenario is trained, there is no need to train a model with very large dataset, and sequence discriminative training with less dataset is enough. Therefore, on the basis of Shenma model, it is sufficient to train a new Amap model with small dataset using sMBR. As shown in Table TABREF18 , Shenma model presents the worst performance among three methods, since it does not trained for Amap scenario. 2-layers Shenma model further trained with sMBR achieves about 8.1% relative reduction, compared with 2-layers regular-trained Amap model. Both training sMBR datasets contain the same 1000 hours data. As a result, with the Shenma model, only about 14% data usage achieves lower CER, and it leads to great time and cost saving with less labeled data. Besides, transfer learning with sMBR does not use the alignment from the HMM-GMM system, so it also saves huge amount of time."
],
[
"We have presented a whole deep unidirectional LSTM parallel training system for LVCSR. The recognition performance improves when the network goes deep. Distillation makes it possible that deep LSTM model transfer its knowledge to shallow model with little loss. The model could be distilled to 2-layers model with very low RTF, so that it can display the immediate recognition results. As a result, its CER decrease relatively 14%, compared with the 2-layers regular trained model. In addition, transfer learning with sMBR is also proposed. If a great model has well trained from generic scenario, only 14% of the size of training dataset is needed to train a more accuracy acoustic model for specific scenario. Our future work includes 1) finding more effective methods to reduce the CER by increasing the number of layers; 2) applying this training framework to Connectionist Temporal Classification (CTC) and attention-based neural networks."
]
],
"section_name": [
"Introduction",
"Layer-wise Training with Soft Target and Hard Target",
"Differential Saturation Check",
"Sequence Discriminative Training",
"Parallel Training",
"Deployment",
"Distillation",
"Transfer Learning with sMBR",
"Training Data",
"Experimental setup",
"Results",
"Layer-wise Training",
"Transfer Learning",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"86877f451ae493ab86f280855917829c75d0b37a",
"bbf186fe0f9d0adfa81f290f25337452149a6f4f"
],
"answer": [
{
"evidence": [
"In this paper, we explore a entire deep LSTM RNN training framework, and employ it to real-time application. The deep learning systems benefit highly from a large quantity of labeled training data. Our first and basic speech recognition system is trained on 17000 hours of Shenma voice search dataset. It is a generic dataset sampled from diverse aspects of search queries. The requirement of speech recognition system also addressed by specific scenario, such as map and navigation task. The labeled dataset is too expensive, and training a new model with new large dataset from the beginning costs lots of time. Thus, it is natural to think of transferring the knowledge from basic model to new scenario's model. Transfer learning expends less data and less training time than full training. In this paper, we also introduce a novel transfer learning strategy with segmental Minimum Bayes-Risk (sMBR). As a result, transfer training with only 1000 hours data can match equivalent performance for full training with 7300 hours data."
],
"extractive_spans": [
"1000 hours data"
],
"free_form_answer": "",
"highlighted_evidence": [
"As a result, transfer training with only 1000 hours data can match equivalent performance for full training with 7300 hours data."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Two dataset is divided into training set, validation set and test set separately, and the quantity of them is shown in Table TABREF10 . The three sets are split according to speakers, in order to avoid utterances of same speaker appearing in three sets simultaneously. The test sets of Shenma and Amap voice search are called Shenma Test and Amap Test."
],
"extractive_spans": [],
"free_form_answer": "23085 hours of data",
"highlighted_evidence": [
"Two dataset is divided into training set, validation set and test set separately, and the quantity of them is shown in Table TABREF10 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1f88061179ecf64f25719c3831b711cf81dea152",
"e79461377bda1499b21a5283c772d5c89dc5caa1"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.",
"FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model.",
"FLOAT SELECTED: Table 4. The CER of different 2-layers models, which are Shenma distilled model, Amap model further trained with Amap dataset, and Shenma model trained with sMBR on Amap dataset."
],
"extractive_spans": [],
"free_form_answer": "2.49% for layer-wise training, 2.63% for distillation, 6.26% for transfer learning.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.",
"FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model.",
"FLOAT SELECTED: Table 4. The CER of different 2-layers models, which are Shenma distilled model, Amap model further trained with Amap dataset, and Shenma model trained with sMBR on Amap dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.",
"FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model."
],
"extractive_spans": [],
"free_form_answer": "Their best model achieved a 2.49% Character Error Rate.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.",
"FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"40649abd8f85327047aa59c01c7b47a758f37c6d"
],
"answer": [
{
"evidence": [
"There is a high real time requirement in real world application, especially in online voice search system. Shenma voice search is one of the most popular mobile search engines in China, and it is a streaming service that intermediate recognition results displayed while users are still speaking. Unidirectional LSTM network is applied, rather than bidirectional one, because it is well suited to real-time streaming speech recognition.",
"FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.",
"FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model."
],
"extractive_spans": [],
"free_form_answer": "Unidirectional LSTM networks with 2, 6, 7, 8, and 9 layers.",
"highlighted_evidence": [
"Unidirectional LSTM network is applied, rather than bidirectional one, because it is well suited to real-time streaming speech recognition.",
"FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.",
"FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"how small of a dataset did they train on?",
"what was their character error rate?",
"which lstm models did they compare with?"
],
"question_id": [
"8060a773f6a136944f7b59758d08cc6f2a59693b",
"1bb7eb5c3d029d95d1abf9f2892c1ec7b6eef306",
"c0af8b7bf52dc15e0b33704822c4a34077e09cd1"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1. The time summation of different sets of Shenma voice search and Amap.",
"Fig. 1. Mesh AllReduce of model averaging and BMUF.",
"Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.",
"Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model.",
"Table 4. The CER of different 2-layers models, which are Shenma distilled model, Amap model further trained with Amap dataset, and Shenma model trained with sMBR on Amap dataset."
],
"file": [
"4-Table1-1.png",
"5-Figure1-1.png",
"6-Table3-1.png",
"6-Table2-1.png",
"6-Table4-1.png"
]
} | [
"how small of a dataset did they train on?",
"what was their character error rate?",
"which lstm models did they compare with?"
] | [
[
"1703.07090-Training Data-2",
"1703.07090-Introduction-8"
],
[
"1703.07090-6-Table3-1.png",
"1703.07090-6-Table2-1.png",
"1703.07090-6-Table4-1.png"
],
[
"1703.07090-6-Table3-1.png",
"1703.07090-6-Table2-1.png",
"1703.07090-Deployment-0"
]
] | [
"23085 hours of data",
"Their best model achieved a 2.49% Character Error Rate.",
"Unidirectional LSTM networks with 2, 6, 7, 8, and 9 layers."
] | 190 |
1810.04635 | Multimodal Speech Emotion Recognition Using Audio and Text | Speech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features in building well-performing classifiers. In this paper, we propose a novel deep dual recurrent encoder model that utilizes text data and audio signals simultaneously to obtain a better understanding of speech data. As emotional dialogue is composed of sound and spoken content, our model encodes the information from audio and text sequences using dual recurrent neural networks (RNNs) and then combines the information from these sources to predict the emotion class. This architecture analyzes speech data from the signal level to the language level, and it thus utilizes the information within the data more comprehensively than models that focus on audio features. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods in assigning data to one of four emotion categories (i.e., angry, happy, sad and neutral) when the model is applied to the IEMOCAP dataset, as reflected by accuracies ranging from 68.8% to 71.8%. | {
"paragraphs": [
[
"Recently, deep learning algorithms have successfully addressed problems in various fields, such as image classification, machine translation, speech recognition, text-to-speech generation and other machine learning related areas BIBREF0 , BIBREF1 , BIBREF2 . Similarly, substantial improvements in performance have been obtained when deep learning algorithms have been applied to statistical speech processing BIBREF3 . These fundamental improvements have led researchers to investigate additional topics related to human nature, which have long been objects of study. One such topic involves understanding human emotions and reflecting it through machine intelligence, such as emotional dialogue models BIBREF4 , BIBREF5 .",
"In developing emotionally aware intelligence, the very first step is building robust emotion classifiers that display good performance regardless of the application; this outcome is considered to be one of the fundamental research goals in affective computing BIBREF6 . In particular, the speech emotion recognition task is one of the most important problems in the field of paralinguistics. This field has recently broadened its applications, as it is a crucial factor in optimal human-computer interactions, including dialog systems. The goal of speech emotion recognition is to predict the emotional content of speech and to classify speech according to one of several labels (i.e., happy, sad, neutral, and angry). Various types of deep learning methods have been applied to increase the performance of emotion classifiers; however, this task is still considered to be challenging for several reasons. First, insufficient data for training complex neural network-based models are available, due to the costs associated with human involvement. Second, the characteristics of emotions must be learned from low-level speech signals. Feature-based models display limited skills when applied to this problem.",
"To overcome these limitations, we propose a model that uses high-level text transcription, as well as low-level audio signals, to utilize the information contained within low-resource datasets to a greater degree. Given recent improvements in automatic speech recognition (ASR) technology BIBREF7 , BIBREF2 , BIBREF8 , BIBREF9 , speech transcription can be carried out using audio signals with considerable skill. The emotional content of speech is clearly indicated by the emotion words contained in a sentence BIBREF10 , such as “lovely” and “awesome,” which carry strong emotions compared to generic (non-emotion) words, such as “person” and “day.” Thus, we hypothesize that the speech emotion recognition model will be benefit from the incorporation of high-level textual input.",
"In this paper, we propose a novel deep dual recurrent encoder model that simultaneously utilizes audio and text data in recognizing emotions from speech. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods by 68.8% to 71.8% when applied to the IEMOCAP dataset, which is one of the most well-studied datasets. Based on an error analysis of the models, we show that our proposed model accurately identifies emotion classes. Moreover, the neutral class misclassification bias frequently exhibited by previous models, which focus on audio features, is less pronounced in our model."
],
[
"Classical machine learning algorithms, such as hidden Markov models (HMMs), support vector machines (SVMs), and decision tree-based methods, have been employed in speech emotion recognition problems BIBREF11 , BIBREF12 , BIBREF13 . Recently, researchers have proposed various neural network-based architectures to improve the performance of speech emotion recognition. An initial study utilized deep neural networks (DNNs) to extract high-level features from raw audio data and demonstrated its effectiveness in speech emotion recognition BIBREF14 . With the advancement of deep learning methods, more complex neural-based architectures have been proposed. Convolutional neural network (CNN)-based models have been trained on information derived from raw audio signals using spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs) BIBREF15 , BIBREF16 , BIBREF17 . These neural network-based models are combined to produce higher-complexity models BIBREF18 , BIBREF19 , and these models achieved the best-recorded performance when applied to the IEMOCAP dataset.",
"Another line of research has focused on adopting variant machine learning techniques combined with neural network-based models. One researcher utilized the multiobject learning approach and used gender and naturalness as auxiliary tasks so that the neural network-based model learned more features from a given dataset BIBREF20 . Another researcher investigated transfer learning methods, leveraging external data from related domains BIBREF21 .",
"As emotional dialogue is composed of sound and spoken content, researchers have also investigated the combination of acoustic features and language information, built belief network-based methods of identifying emotional key phrases, and assessed the emotional salience of verbal cues from both phoneme sequences and words BIBREF22 , BIBREF23 . However, none of these studies have utilized information from speech signals and text sequences simultaneously in an end-to-end learning neural network-based model to classify emotions."
],
[
"This section describes the methodologies that are applied to the speech emotion recognition task. We start by introducing the recurrent encoder model for the audio and text modalities individually. We then propose a multimodal approach that encodes both audio and textual information simultaneously via a dual recurrent encoder."
],
[
"Motivated by the architecture used in BIBREF24 , BIBREF25 , we build an audio recurrent encoder (ARE) to predict the class of a given audio signal. Once MFCC features have been extracted from an audio signal, a subset of the sequential features is fed into the RNN (i.e., gated recurrent units (GRUs)), which leads to the formation of the network's internal hidden state INLINEFORM0 to model the time series patterns. This internal hidden state is updated at each time step with the input data INLINEFORM1 and the hidden state of the previous time step INLINEFORM2 as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is the RNN function with weight parameter INLINEFORM1 , INLINEFORM2 represents the hidden state at t- INLINEFORM3 time step, and INLINEFORM4 represents the t- INLINEFORM5 MFCC features in INLINEFORM6 . After encoding the audio signal INLINEFORM7 with the RNN, the last hidden state of the RNN, INLINEFORM8 , is considered to be the representative vector that contains all of the sequential audio data. This vector is then concatenated with another prosodic feature vector, INLINEFORM9 , to generate a more informative vector representation of the signal, INLINEFORM10 . The MFCC and the prosodic features are extracted from the audio signal using the openSMILE toolkit BIBREF26 , INLINEFORM11 , respectively. Finally, the emotion class is predicted by applying the softmax function to the vector INLINEFORM12 . For a given audio sample INLINEFORM13 , we assume that INLINEFORM14 is the true label vector, which contains all zeros but contains a one at the correct class, and INLINEFORM15 is the predicted probability distribution from the softmax layer. The training objective then takes the following form: DISPLAYFORM0 ",
"where INLINEFORM0 is the calculated representative vector of the audio signal with dimensionality INLINEFORM1 . The INLINEFORM2 and the bias INLINEFORM3 are learned model parameters. C is the total number of classes, and N is the total number of samples used in training. The upper part of Figure shows the architecture of the ARE model."
],
[
"We assume that speech transcripts can be extracted from audio signals with high accuracy, given the advancement of ASR technologies BIBREF7 . We attempt to use the processed textual information as another modality in predicting the emotion class of a given signal. To use textual information, a speech transcript is tokenized and indexed into a sequence of tokens using the Natural Language Toolkit (NLTK) BIBREF27 . Each token is then passed through a word-embedding layer that converts a word index to a corresponding 300-dimensional vector that contains additional contextual meaning between words. The sequence of embedded tokens is fed into a text recurrent encoder (TRE) in such a way that the audio MFCC features are encoded using the ARE represented by equation EQREF2 . In this case, INLINEFORM0 is the t- INLINEFORM1 embedded token from the text input. Finally, the emotion class is predicted from the last hidden state of the text-RNN using the softmax function.",
"We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is last hidden state of the text-RNN, INLINEFORM1 , and the INLINEFORM2 and bias INLINEFORM3 are learned model parameters. The lower part of Figure indicates the architecture of the TRE model."
],
[
"We present a novel architecture called the multimodal dual recurrent encoder (MDRE) to overcome the limitations of existing approaches. In this study, we consider multiple modalities, such as MFCC features, prosodic features and transcripts, which contain sequential audio information, statistical audio information and textual information, respectively. These types of data are the same as those used in the ARE and TRE cases. The MDRE model employs two RNNs to encode data from the audio signal and textual inputs independently. The audio-RNN encodes MFCC features from the audio signal using equation EQREF2 . The last hidden state of the audio-RNN is concatenated with the prosodic features to form the final vector representation INLINEFORM0 , and this vector is then passed through a fully connected neural network layer to form the audio encoding vector A. On the other hand, the text-RNN encodes the word sequence of the transcript using equation EQREF2 . The final hidden states of the text-RNN are also passed through another fully connected neural network layer to form a textual encoding vector T. Finally, the emotion class is predicted by applying the softmax function to the concatenation of the vectors A and T. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is the feed-forward neural network with weight parameter INLINEFORM1 , and INLINEFORM2 , INLINEFORM3 are final encoding vectors from the audio-RNN and text-RNN, respectively. INLINEFORM4 and the bias INLINEFORM5 are learned model parameters."
],
[
"Inspired by the concept of the attention mechanism used in neural machine translation BIBREF28 , we propose a novel multimodal attention method to focus on the specific parts of a transcript that contain strong emotional information, conditioning on the audio information. Figure shows the architecture of the MDREA model. First, the audio data and text data are encoded with the audio-RNN and text-RNN using equation EQREF2 . We then consider the final audio encoding vector INLINEFORM0 as a context vector. As seen in equation EQREF9 , during each time step t, the dot product between the context vector e and the hidden state of the text-RNN at each t-th sequence INLINEFORM1 is evaluated to calculate a similarity score INLINEFORM2 . Using this score INLINEFORM3 as a weight parameter, the weighted sum of the sequences of the hidden state of the text-RNN, INLINEFORM4 , is calculated to generate an attention-application vector Z. This attention-application vector is concatenated with the final encoding vector of the audio-RNN INLINEFORM5 (equation EQREF7 ), which will be passed through the softmax function to predict the emotion class. We use the same training objective as the ARE model, and the predicted probability distribution for the target class is as follows: DISPLAYFORM0 ",
"where INLINEFORM0 and the bias INLINEFORM1 are learned model parameters."
],
[
"We evaluate our model using the Interactive Emotional Dyadic Motion Capture (IEMOCAP) BIBREF18 dataset. This dataset was collected following theatrical theory in order to simulate natural dyadic interactions between actors. We use categorical evaluations with majority agreement. We use only four emotional categories happy, sad, angry, and neutral to compare the performance of our model with other research using the same categories. The IEMOCAP dataset includes five sessions, and each session contains utterances from two speakers (one male and one female). This data collection process resulted in 10 unique speakers. For consistent comparison with previous work, we merge the excitement dataset with the happiness dataset. The final dataset contains a total of 5531 utterances (1636 happy, 1084 sad, 1103 angry, 1708 neutral)."
],
[
"To extract speech information from audio signals, we use MFCC values, which are widely used in analyzing audio signals. The MFCC feature set contains a total of 39 features, which include 12 MFCC parameters (1-12) from the 26 Mel-frequency bands and log-energy parameters, 13 delta and 13 acceleration coefficients The frame size is set to 25 ms at a rate of 10 ms with the Hamming function. According to the length of each wave file, the sequential step of the MFCC features is varied. To extract additional information from the data, we also use prosodic features, which show effectiveness in affective computing. The prosodic features are composed of 35 features, which include the F0 frequency, the voicing probability, and the loudness contours. All of these MFCC and prosodic features are extracted from the data using the OpenSMILE toolkit BIBREF26 ."
],
[
"Among the variants of the RNN function, we use GRUs as they yield comparable performance to that of the LSTM and include a smaller number of weight parameters BIBREF29 . We use a max encoder step of 750 for the audio input, based on the implementation choices presented in BIBREF30 and 128 for the text input because it covers the maximum length of the transcripts. The vocabulary size of the dataset is 3,747, including the “_UNK_\" token, which represents unknown words, and the “_PAD_\" token, which is used to indicate padding information added while preparing mini-batch data. The number of hidden units and the number of layers in the RNN for each model (ARE, TRE, MDRE and MDREA) are selected based on extensive hyperparameter search experiments. The weights of the hidden units are initialized using orthogonal weights BIBREF31 ], and the text embedding layer is initialized from pretrained word-embedding vectors BIBREF32 .",
"In preparing the textual dataset, we first use the released transcripts of the IEMOCAP dataset for simplicity. To investigate the practical performance, we then process all of the IEMOCAP audio data using an ASR system (the Google Cloud Speech API) and retrieve the transcripts. The performance of the Google ASR system is reflected by its word error rate (WER) of 5.53%."
],
[
"As the dataset is not explicitly split beforehand into training, development, and testing sets, we perform 5-fold cross validation to determine the overall performance of the model. The data in each fold are split into training, development, and testing datasets (8:0.5:1.5, respectively). After training the model, we measure the weighted average precision (WAP) over the 5-fold dataset. We train and evaluate the model 10 times per fold, and the model performance is assessed in terms of the mean score and standard deviation.",
"We examine the WAP values, which are shown in Table 1. First, our ARE model shows the baseline performance because we use minimal audio features, such as the MFCC and prosodic features with simple architectures. On the other hand, the TRE model shows higher performance gain compared to the ARE. From this result, we note that textual data are informative in emotion prediction tasks, and the recurrent encoder model is effective in understanding these types of sequential data. Second, the newly proposed model, MDRE, shows a substantial performance gain. It thus achieves the state-of-the-art performance with a WAP value of 0.718. This result shows that multimodal information is a key factor in affective computing. Lastly, the attention model, MDREA, also outperforms the best existing research results (WAP 0.690 to 0.688) BIBREF19 . However, the MDREA model does not match the performance of the MDRE model, even though it utilizes a more complex architecture. We believe that this result arises because insufficient data are available to properly determine the complex model parameters in the MDREA model. Moreover, we presume that this model will show better performance when the audio signals are aligned with the textual sequence while applying the attention mechanism. We leave the implementation of this point as a future research direction.",
"To investigate the practical performance of the proposed models, we conduct further experiments with the ASR-processed transcript data (see “-ASR” models in Table ). The label accuracy of the processed transcripts is 5.53% WER. The TRE-ASR, MDRE-ASR and MDREA-ASR models reflect degraded performance compared to that of the TRE, MDRE and MDREA models. However, the performance of these models is still competitive; in particular, the MDRE-ASR model outperforms the previous best-performing model, 3CNN-LSTM10H (WAP 0.691 to 0.688)."
],
[
"We analyze the predictions of the ARE, TRE, and MDRE models. Figure shows the confusion matrix of each model. The ARE model (Fig. ) incorrectly classifies most instances of happy as neutral (43.51%); thus, it shows reduced accuracy (35.15%) in predicting the the happy class. Overall, most of the emotion classes are frequently confused with the neutral class. This observation is in line with the findings of BIBREF30 , who noted that the neutral class is located in the center of the activation-valence space, complicating its discrimination from the other classes.",
"Interestingly, the TRE model (Fig. ) shows greater prediction gains in predicting the happy class when compared to the ARE model (35.15% to 75.73%). This result seems plausible because the model can benefit from the differences among the distributions of words in happy and neutral expressions, which gives more emotional information to the model than that of the audio signal data. On the other hand, it is striking that the TRE model incorrectly predicts instances of the sad class as the happy class 16.20% of the time, even though these emotional states are opposites of one another.",
"The MDRE model (Fig. ) compensates for the weaknesses of the previous two models (ARE and TRE) and benefits from their strengths to a surprising degree. The values arranged along the diagonal axis show that all of the accuracies of the correctly predicted class have increased. Furthermore, the occurrence of the incorrect “sad-to-happy\" cases in the TRE model is reduced from 16.20% to 9.15%."
],
[
"In this paper, we propose a novel multimodal dual recurrent encoder model that simultaneously utilizes text data, as well as audio signals, to permit the better understanding of speech data. Our model encodes the information from audio and text sequences using dual RNNs and then combines the information from these sources using a feed-forward neural model to predict the emotion class. Extensive experiments show that our proposed model outperforms other state-of-the-art methods in classifying the four emotion categories, and accuracies ranging from 68.8% to 71.8% are obtained when the model is applied to the IEMOCAP dataset. In particular, it resolves the issue in which predictions frequently incorrectly yield the neutral class, as occurs in previous models that focus on audio features.",
"In the future work, we aim to extend the modalities to audio, text and video inputs. Furthermore, we plan to investigate the application of the attention mechanism to data derived from multiple modalities. This approach seems likely to uncover enhanced learning schemes that will increase performance in both speech emotion recognition and other multimodal classification tasks."
],
[
"K. Jung is with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Korea. This work was supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under Industrial Technology Innovation Program (No.10073144)."
]
],
"section_name": [
"Introduction",
"Related work",
"Model",
"Audio Recurrent Encoder (ARE)",
"Text Recurrent Encoder (TRE)",
"Multimodal Dual Recurrent Encoder (MDRE)",
"Multimodal Dual Recurrent Encoder with Attention (MDREA)",
"Dataset",
"Feature extraction",
"Implementation details",
"Performance evaluation",
"Error analysis",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"3dfac856c65ec2f57df666baf46c88214c740c76",
"dec777c635cfb2bc735d2b56fdf16e6c5e1fb38e"
],
"answer": [
{
"evidence": [
"We assume that speech transcripts can be extracted from audio signals with high accuracy, given the advancement of ASR technologies BIBREF7 . We attempt to use the processed textual information as another modality in predicting the emotion class of a given signal. To use textual information, a speech transcript is tokenized and indexed into a sequence of tokens using the Natural Language Toolkit (NLTK) BIBREF27 . Each token is then passed through a word-embedding layer that converts a word index to a corresponding 300-dimensional vector that contains additional contextual meaning between words. The sequence of embedded tokens is fed into a text recurrent encoder (TRE) in such a way that the audio MFCC features are encoded using the ARE represented by equation EQREF2 . In this case, INLINEFORM0 is the t- INLINEFORM1 embedded token from the text input. Finally, the emotion class is predicted from the last hidden state of the text-RNN using the softmax function."
],
"extractive_spans": [],
"free_form_answer": "They use text transcription.",
"highlighted_evidence": [
"We assume that speech transcripts can be extracted from audio signals with high accuracy, given the advancement of ASR technologies BIBREF7 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To overcome these limitations, we propose a model that uses high-level text transcription, as well as low-level audio signals, to utilize the information contained within low-resource datasets to a greater degree. Given recent improvements in automatic speech recognition (ASR) technology BIBREF7 , BIBREF2 , BIBREF8 , BIBREF9 , speech transcription can be carried out using audio signals with considerable skill. The emotional content of speech is clearly indicated by the emotion words contained in a sentence BIBREF10 , such as “lovely” and “awesome,” which carry strong emotions compared to generic (non-emotion) words, such as “person” and “day.” Thus, we hypothesize that the speech emotion recognition model will be benefit from the incorporation of high-level textual input."
],
"extractive_spans": [],
"free_form_answer": "both",
"highlighted_evidence": [
"To overcome these limitations, we propose a model that uses high-level text transcription, as well as low-level audio signals, to utilize the information contained within low-resource datasets to a greater degree."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"20b4145b8cb9be791e6709f74020885bde003801"
],
"answer": [
{
"evidence": [
"We examine the WAP values, which are shown in Table 1. First, our ARE model shows the baseline performance because we use minimal audio features, such as the MFCC and prosodic features with simple architectures. On the other hand, the TRE model shows higher performance gain compared to the ARE. From this result, we note that textual data are informative in emotion prediction tasks, and the recurrent encoder model is effective in understanding these types of sequential data. Second, the newly proposed model, MDRE, shows a substantial performance gain. It thus achieves the state-of-the-art performance with a WAP value of 0.718. This result shows that multimodal information is a key factor in affective computing. Lastly, the attention model, MDREA, also outperforms the best existing research results (WAP 0.690 to 0.688) BIBREF19 . However, the MDREA model does not match the performance of the MDRE model, even though it utilizes a more complex architecture. We believe that this result arises because insufficient data are available to properly determine the complex model parameters in the MDREA model. Moreover, we presume that this model will show better performance when the audio signals are aligned with the textual sequence while applying the attention mechanism. We leave the implementation of this point as a future research direction."
],
"extractive_spans": [
"the attention model, MDREA, also outperforms the best existing research results (WAP 0.690 to 0.688)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Lastly, the attention model, MDREA, also outperforms the best existing research results (WAP 0.690 to 0.688) BIBREF19 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"bc5c887b6d29ddfea646a8b14819eb4b89607db8",
"c4a0aaa1c0c27a5658fb3ad30fbcf71a21ebe0c4"
],
"answer": [
{
"evidence": [
"In this paper, we propose a novel multimodal dual recurrent encoder model that simultaneously utilizes text data, as well as audio signals, to permit the better understanding of speech data. Our model encodes the information from audio and text sequences using dual RNNs and then combines the information from these sources using a feed-forward neural model to predict the emotion class. Extensive experiments show that our proposed model outperforms other state-of-the-art methods in classifying the four emotion categories, and accuracies ranging from 68.8% to 71.8% are obtained when the model is applied to the IEMOCAP dataset. In particular, it resolves the issue in which predictions frequently incorrectly yield the neutral class, as occurs in previous models that focus on audio features."
],
"extractive_spans": [
"combines the information from these sources using a feed-forward neural model"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our model encodes the information from audio and text sequences using dual RNNs and then combines the information from these sources using a feed-forward neural model to predict the emotion class."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we propose a novel multimodal dual recurrent encoder model that simultaneously utilizes text data, as well as audio signals, to permit the better understanding of speech data. Our model encodes the information from audio and text sequences using dual RNNs and then combines the information from these sources using a feed-forward neural model to predict the emotion class. Extensive experiments show that our proposed model outperforms other state-of-the-art methods in classifying the four emotion categories, and accuracies ranging from 68.8% to 71.8% are obtained when the model is applied to the IEMOCAP dataset. In particular, it resolves the issue in which predictions frequently incorrectly yield the neutral class, as occurs in previous models that focus on audio features."
],
"extractive_spans": [
"encodes the information from audio and text sequences using dual RNNs and then combines the information from these sources using a feed-forward neural model"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our model encodes the information from audio and text sequences using dual RNNs and then combines the information from these sources using a feed-forward neural model to predict the emotion class. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they use datasets with transcribed text or do they determine text from the audio?",
"By how much does their model outperform the state of the art results?",
"How do they combine audio and text sequences in their RNN?"
],
"question_id": [
"9de2f73a3db0c695e5e0f5a3d791fdc370b1df6e",
"e0122fc7b0143d5cbcda2120be87a012fb987627",
"5da26954fbcd3cf6a7dba9f8b3c9a4b0391f67d4"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Multimodal dual recurrent encoder. The upper part shows the ARE, which encodes audio signals, and the lower part shows the TRE, which encodes textual information.",
"Fig. 2. Architecture of the MDREA model. The weighted sum of the sequence of the hidden states of the text-RNN ht is taken using the attention weight at; at is calculated as the dot product of the final encoding vector of the audio-RNN e and ht.",
"Table 1. Model performance comparisons. The top 2 bestperforming models (according to the unweighted average recall) are marked in bold. The “-ASR” models are trained with processed transcripts from the Google Cloud Speech API.",
"Fig. 3. Confusion matrix of each model."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"5-Figure3-1.png"
]
} | [
"Do they use datasets with transcribed text or do they determine text from the audio?"
] | [
[
"1810.04635-Text Recurrent Encoder (TRE)-0",
"1810.04635-Introduction-2"
]
] | [
"both"
] | 191 |
1707.03569 | Multitask Learning for Fine-Grained Twitter Sentiment Analysis | Traditional sentiment analysis approaches tackle problems like ternary (3-category) and fine-grained (5-category) classification by learning the tasks separately. We argue that such classification tasks are correlated and we propose a multitask approach based on a recurrent neural network that benefits by jointly learning them. Our study demonstrates the potential of multitask models on this type of problems and improves the state-of-the-art results in the fine-grained sentiment classification problem. | {
"paragraphs": [
[
"Automatic classification of sentiment has mainly focused on categorizing tweets in either two (binary sentiment analysis) or three (ternary sentiment analysis) categories BIBREF0 . In this work we study the problem of fine-grained sentiment classification where tweets are classified according to a five-point scale ranging from VeryNegative to VeryPositive. To illustrate this, Table TABREF3 presents examples of tweets associated with each of these categories. Five-point scales are widely adopted in review sites like Amazon and TripAdvisor, where a user's sentiment is ordered with respect to its intensity. From a sentiment analysis perspective, this defines a classification problem with five categories. In particular, Sebastiani et al. BIBREF1 defined such classification problems whose categories are explicitly ordered to be ordinal classification problems. To account for the ordering of the categories, learners are penalized according to how far from the true class their predictions are.",
"Although considering different scales, the various settings of sentiment classification are related. First, one may use the same feature extraction and engineering approaches to represent the text spans such as word membership in lexicons, morpho-syntactic statistics like punctuation or elongated word counts BIBREF2 , BIBREF3 . Second, one would expect that knowledge from one task can be transfered to the others and this would benefit the performance. Knowing that a tweet is “Positive” in the ternary setting narrows the classification decision between the VeryPositive and Positive categories in the fine-grained setting. From a research perspective this raises the question of whether and how one may benefit when tackling such related tasks and how one can transfer knowledge from one task to another during the training phase.",
"Our focus in this work is to exploit the relation between the sentiment classification settings and demonstrate the benefits stemming from combining them. To this end, we propose to formulate the different classification problems as a multitask learning problem and jointly learn them. Multitask learning BIBREF4 has shown great potential in various domains and its benefits have been empirically validated BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 using different types of data and learning approaches. An important benefit of multitask learning is that it provides an elegant way to access resources developed for similar tasks. By jointly learning correlated tasks, the amount of usable data increases. For instance, while for ternary classification one can label data using distant supervision with emoticons BIBREF9 , there is no straightforward way to do so for the fine-grained problem. However, the latter can benefit indirectly, if the ternary and fine-grained tasks are learned jointly.",
"The research question that the paper attempts to answer is the following: Can twitter sentiment classification problems, and fine-grained sentiment classification in particular, benefit from multitask learning? To answer the question, the paper brings the following two main contributions: (i) we show how jointly learning the ternary and fine-grained sentiment classification problems in a multitask setting improves the state-of-the-art performance, and (ii) we demonstrate that recurrent neural networks outperform models previously proposed without access to huge corpora while being flexible to incorporate different sources of data."
],
[
"In his work, Caruana BIBREF4 proposed a multitask approach in which a learner takes advantage of the multiplicity of interdependent tasks while jointly learning them. The intuition is that if the tasks are correlated, the learner can learn a model jointly for them while taking into account the shared information which is expected to improve its generalization ability. People express their opinions online on various subjects (events, products..), on several languages and in several styles (tweets, paragraph-sized reviews..), and it is exactly this variety that motivates the multitask approaches. Specifically for Twitter for instance, the different settings of classification like binary, ternary and fine-grained are correlated since their difference lies in the sentiment granularity of the classes which increases while moving from binary to fine-grained problems.",
"There are two main decisions to be made in our approach: the learning algorithm, which learns a decision function, and the data representation. With respect to the former, neural networks are particularly suitable as one can design architectures with different properties and arbitrary complexity. Also, as training neural network usually relies on back-propagation of errors, one can have shared parts of the network trained by estimating errors on the joint tasks and others specialized for particular tasks. Concerning the data representation, it strongly depends on the data type available. For the task of sentiment classification of tweets with neural networks, distributed embeddings of words have shown great potential. Embeddings are defined as low-dimensional, dense representations of words that can be obtained in an unsupervised fashion by training on large quantities of text BIBREF10 .",
"Concerning the neural network architecture, we focus on Recurrent Neural Networks (RNNs) that are capable of modeling short-range and long-range dependencies like those exhibited in sequence data of arbitrary length like text. While in the traditional information retrieval paradigm such dependencies are captured using INLINEFORM0 -grams and skip-grams, RNNs learn to capture them automatically BIBREF11 . To circumvent the problems with capturing long-range dependencies and preventing gradients from vanishing, the long short-term memory network (LSTM) was proposed BIBREF12 . In this work, we use an extended version of LSTM called bidirectional LSTM (biLSTM). While standard LSTMs access information only from the past (previous words), biLSTMs capture both past and future information effectively BIBREF13 , BIBREF11 . They consist of two LSTM networks, for propagating text forward and backwards with the goal being to capture the dependencies better. Indeed, previous work on multitask learning showed the effectiveness of biLSTMs in a variety of problems: BIBREF14 tackled sequence prediction, while BIBREF6 and BIBREF15 used biLSTMs for Named Entity Recognition and dependency parsing respectively.",
"Figure FIGREF2 presents the architecture we use for multitask learning. In the top-left of the figure a biLSTM network (enclosed by the dashed line) is fed with embeddings INLINEFORM0 that correspond to the INLINEFORM1 words of a tokenized tweet. Notice, as discussed above, the biLSTM consists of two LSTMs that are fed with the word sequence forward and backwards. On top of the biLSTM network one (or more) hidden layers INLINEFORM2 transform its output. The output of INLINEFORM3 is led to the softmax layers for the prediction step. There are INLINEFORM4 softmax layers and each is used for one of the INLINEFORM5 tasks of the multitask setting. In tasks such as sentiment classification, additional features like membership of words in sentiment lexicons or counts of elongated/capitalized words can be used to enrich the representation of tweets before the classification step BIBREF3 . The lower part of the network illustrates how such sources of information can be incorporated to the process. A vector “Additional Features” for each tweet is transformed from the hidden layer(s) INLINEFORM6 and then is combined by concatenation with the transformed biLSTM output in the INLINEFORM7 layer."
],
[
"Our goal is to demonstrate how multitask learning can be successfully applied on the task of sentiment classification of tweets. The particularities of tweets are to be short and informal text spans. The common use of abbreviations, creative language etc., makes the sentiment classification problem challenging. To validate our hypothesis, that learning the tasks jointly can benefit the performance, we propose an experimental setting where there are data from two different twitter sentiment classification problems: a fine-grained and a ternary. We consider the fine-grained task to be our primary task as it is more challenging and obtaining bigger datasets, e.g. by distant supervision, is not straightforward and, hence we report the performance achieved for this task.",
"Ternary and fine-grained sentiment classification were part of the SemEval-2016 “Sentiment Analysis in Twitter” task BIBREF16 . We use the high-quality datasets the challenge organizers released. The dataset for fine-grained classification is split in training, development, development_test and test parts. In the rest, we refer to these splits as train, development and test, where train is composed by the training and the development instances. Table TABREF7 presents an overview of the data. As discussed in BIBREF16 and illustrated in the Table, the fine-grained dataset is highly unbalanced and skewed towards the positive sentiment: only INLINEFORM0 of the training examples are labeled with one of the negative classes.",
"Feature representation We report results using two different feature sets. The first one, dubbed nbow, is a neural bag-of-words that uses text embeddings to generate low-dimensional, dense representations of the tweets. To construct the nbow representation, given the word embeddings dictionary where each word is associated with a vector, we apply the average compositional function that averages the embeddings of the words that compose a tweet. Simple compositional functions like average were shown to be robust and efficient in previous work BIBREF17 . Instead of training embeddings from scratch, we use the pre-trained on tweets GloVe embeddings of BIBREF10 . In terms of resources required, using only nbow is efficient as it does not require any domain knowledge. However, previous research on sentiment analysis showed that using extra resources, like sentiment lexicons, can benefit significantly the performance BIBREF3 , BIBREF2 . To validate this and examine at which extent neural networks and multitask learning benefit from such features we evaluate the models using an augmented version of nbow, dubbed nbow+. The feature space of the latter, is augmented using 1,368 extra features consisting mostly of counts of punctuation symbols ('!?#@'), emoticons, elongated words and word membership features in several sentiment lexicons. Due to space limitations, for a complete presentation of these features, we refer the interested reader to BIBREF2 , whose open implementation we used to extract them.",
"Evaluation measure To reproduce the setting of the SemEval challenges BIBREF16 , we optimize our systems using as primary measure the macro-averaged Mean Absolute Error ( INLINEFORM0 ) given by: INLINEFORM1 ",
"",
"where INLINEFORM0 is the number of categories, INLINEFORM1 is the set of instances whose true class is INLINEFORM2 , INLINEFORM3 is the true label of the instance INLINEFORM4 and INLINEFORM5 the predicted label. The measure penalizes decisions far from the true ones and is macro-averaged to account for the fact that the data are unbalanced. Complementary to INLINEFORM6 , we report the performance achieved on the micro-averaged INLINEFORM7 measure, which is a commonly used measure for classification.",
"The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion.",
"Both SVMs and LRs as discussed above treat the problem as a multi-class one, without considering the ordering of the classes. For these four models, we tuned the hyper-parameter INLINEFORM0 that controls the importance of the L INLINEFORM1 regularization part in the optimization problem with grid-search over INLINEFORM2 using 10-fold cross-validation in the union of the training and development data and then retrained the models with the selected values. Also, to account for the unbalanced classification problem we used class weights to penalize more the errors made on the rare classes. These weights were inversely proportional to the frequency of each class. For the four models we used the implementations of Scikit-learn BIBREF19 .",
"For multitask learning we use the architecture shown in Figure FIGREF2 , which we implemented with Keras BIBREF20 . The embeddings are initialized with the 50-dimensional GloVe embeddings while the output of the biLSTM network is set to dimension 50. The activation function of the hidden layers is the hyperbolic tangent. The weights of the layers were initialized from a uniform distribution, scaled as described in BIBREF21 . We used the Root Mean Square Propagation optimization method. We used dropout for regularizing the network. We trained the network using batches of 128 examples as follows: before selecting the batch, we perform a Bernoulli trial with probability INLINEFORM0 to select the task to train for. With probability INLINEFORM1 we pick a batch for the fine-grained sentiment classification problem, while with probability INLINEFORM2 we pick a batch for the ternary problem. As shown in Figure FIGREF2 , the error is backpropagated until the embeddings, that we fine-tune during the learning process. Notice also that the weights of the network until the layer INLINEFORM3 are shared and therefore affected by both tasks.",
"To tune the neural network hyper-parameters we used 5-fold cross validation. We tuned the probability INLINEFORM0 of dropout after the hidden layers INLINEFORM1 and for the biLSTM for INLINEFORM2 , the size of the hidden layer INLINEFORM3 and the probability INLINEFORM4 of the Bernoulli trials from INLINEFORM5 . During training, we monitor the network's performance on the development set and apply early stopping if the performance on the validation set does not improve for 5 consecutive epochs.",
"Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry “Balikas et al.” stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art. Due to the stochasticity of training the biLSTM models, we repeat the experiment 10 times and report the average and the standard deviation of the performance achieved.",
"Several observations can be made from the table. First notice that, overall, the best performance is achieved by the neural network architecture that uses multitask learning. This entails that the system makes use of the available resources efficiently and improves the state-of-the-art performance. In conjunction with the fact that we found the optimal probability INLINEFORM0 , this highlights the benefits of multitask learning over single task learning. Furthermore, as described above, the neural network-based models have only access to the training data as the development are hold for early stopping. On the other hand, the baseline systems were retrained on the union of the train and development sets. Hence, even with fewer resources available for training on the fine-grained problem, the neural networks outperform the baselines. We also highlight the positive effect of the additional features that previous research proposed. Adding the features both in the baselines and in the biLSTM-based architectures improves the INLINEFORM1 scores by several points.",
"Lastly, we compare the performance of the baseline systems with the performance of the state-of-the-art system of BIBREF2 . While BIBREF2 uses n-grams (and character-grams) with INLINEFORM0 , the baseline systems (SVMs, LRs) used in this work use the nbow+ representation, that relies on unigrams. Although they perform on par, the competitive performance of nbow highlights the potential of distributed representations for short-text classification. Further, incorporating structure and distributed representations leads to the gains of the biLSTM network, in the multitask and single task setting.",
"Similar observations can be drawn from Figure FIGREF10 that presents the INLINEFORM0 scores. Again, the biLSTM network with multitask learning achieves the best performance. It is also to be noted that although the two evaluation measures are correlated in the sense that the ranking of the models is the same, small differences in the INLINEFORM1 have large effect on the scores of the INLINEFORM2 measure."
],
[
"In this paper, we showed that by jointly learning the tasks of ternary and fine-grained classification with a multitask learning model, one can greatly improve the performance on the second. This opens several avenues for future research. Since sentiment is expressed in different textual types like tweets and paragraph-sized reviews, in different languages (English, German, ..) and in different granularity levels (binary, ternary,..) one can imagine multitask approaches that could benefit from combining such resources. Also, while we opted for biLSTM networks here, one could use convolutional neural networks or even try to combine different types of networks and tasks to investigate the performance effect of multitask learning. Lastly, while our approach mainly relied on the foundations of BIBREF4 , the internal mechanisms and the theoretical guarantees of multitask learning remain to be better understood."
],
[
"This work is partially supported by the CIFRE N 28/2015."
]
],
"section_name": [
"Introduction",
"Multitask Learning for Twitter Sentiment Classification",
"Experimental setup",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"236be83a132622e7d8a91744227657e2bff3783c",
"e39c2b51856f3f84c8492ef1747636f62dbd9702"
],
"answer": [
{
"evidence": [
"Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry “Balikas et al.” stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art. Due to the stochasticity of training the biLSTM models, we repeat the experiment 10 times and report the average and the standard deviation of the performance achieved.",
"The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion.",
"For multitask learning we use the architecture shown in Figure FIGREF2 , which we implemented with Keras BIBREF20 . The embeddings are initialized with the 50-dimensional GloVe embeddings while the output of the biLSTM network is set to dimension 50. The activation function of the hidden layers is the hyperbolic tangent. The weights of the layers were initialized from a uniform distribution, scaled as described in BIBREF21 . We used the Root Mean Square Propagation optimization method. We used dropout for regularizing the network. We trained the network using batches of 128 examples as follows: before selecting the batch, we perform a Bernoulli trial with probability INLINEFORM0 to select the task to train for. With probability INLINEFORM1 we pick a batch for the fine-grained sentiment classification problem, while with probability INLINEFORM2 we pick a batch for the ternary problem. As shown in Figure FIGREF2 , the error is backpropagated until the embeddings, that we fine-tune during the learning process. Notice also that the weights of the network until the layer INLINEFORM3 are shared and therefore affected by both tasks.",
"FLOAT SELECTED: Table 3 The scores on MAEM for the systems. The best (lowest) score is shown in bold and is achieved in the multitask setting with the biLSTM architecture of Figure 1."
],
"extractive_spans": [
"SVMs",
"LR",
"BIBREF2"
],
"free_form_answer": "",
"highlighted_evidence": [
"Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry “Balikas et al.” stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art.",
"o evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion.",
"For multitask learning we use the architecture shown in Figure FIGREF2 , which we implemented with Keras BIBREF20 . The embeddings are initialized with the 50-dimensional GloVe embeddings while the output of the biLSTM network is set to dimension 50. ",
"FLOAT SELECTED: Table 3 The scores on MAEM for the systems. The best (lowest) score is shown in bold and is achieved in the multitask setting with the biLSTM architecture of Figure 1."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion."
],
"extractive_spans": [
"SVM INLINEFORM0",
"SVM INLINEFORM1",
"LR INLINEFORM2",
"MaxEnt"
],
"free_form_answer": "",
"highlighted_evidence": [
" To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"7e68608ebda5c00afedf94a68b6d3c51d8e3f334"
],
"answer": [
{
"evidence": [
"Concerning the neural network architecture, we focus on Recurrent Neural Networks (RNNs) that are capable of modeling short-range and long-range dependencies like those exhibited in sequence data of arbitrary length like text. While in the traditional information retrieval paradigm such dependencies are captured using INLINEFORM0 -grams and skip-grams, RNNs learn to capture them automatically BIBREF11 . To circumvent the problems with capturing long-range dependencies and preventing gradients from vanishing, the long short-term memory network (LSTM) was proposed BIBREF12 . In this work, we use an extended version of LSTM called bidirectional LSTM (biLSTM). While standard LSTMs access information only from the past (previous words), biLSTMs capture both past and future information effectively BIBREF13 , BIBREF11 . They consist of two LSTM networks, for propagating text forward and backwards with the goal being to capture the dependencies better. Indeed, previous work on multitask learning showed the effectiveness of biLSTMs in a variety of problems: BIBREF14 tackled sequence prediction, while BIBREF6 and BIBREF15 used biLSTMs for Named Entity Recognition and dependency parsing respectively.",
"The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion.",
"Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry “Balikas et al.” stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art. Due to the stochasticity of training the biLSTM models, we repeat the experiment 10 times and report the average and the standard deviation of the performance achieved.",
"FLOAT SELECTED: Table 3 The scores on MAEM for the systems. The best (lowest) score is shown in bold and is achieved in the multitask setting with the biLSTM architecture of Figure 1.",
"Evaluation measure To reproduce the setting of the SemEval challenges BIBREF16 , we optimize our systems using as primary measure the macro-averaged Mean Absolute Error ( INLINEFORM0 ) given by: INLINEFORM1"
],
"extractive_spans": [],
"free_form_answer": "They decrease MAE in 0.34",
"highlighted_evidence": [
"In this work, we use an extended version of LSTM called bidirectional LSTM (biLSTM)",
"The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16",
"Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation.",
"Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry “Balikas et al.” stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art. ",
"FLOAT SELECTED: Table 3 The scores on MAEM for the systems. The best (lowest) score is shown in bold and is achieved in the multitask setting with the biLSTM architecture of Figure 1.",
"To reproduce the setting of the SemEval challenges BIBREF16 , we optimize our systems using as primary measure the macro-averaged Mean Absolute Error ( INLINEFORM0 ) given by: INLINEFORM1"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"403234711fc0279d426151f1b0bd9e1ee057775c",
"ddcff40ac3fdcf1842519fff6a77cd3810d75b8d"
],
"answer": [
{
"evidence": [
"Ternary and fine-grained sentiment classification were part of the SemEval-2016 “Sentiment Analysis in Twitter” task BIBREF16 . We use the high-quality datasets the challenge organizers released. The dataset for fine-grained classification is split in training, development, development_test and test parts. In the rest, we refer to these splits as train, development and test, where train is composed by the training and the development instances. Table TABREF7 presents an overview of the data. As discussed in BIBREF16 and illustrated in the Table, the fine-grained dataset is highly unbalanced and skewed towards the positive sentiment: only INLINEFORM0 of the training examples are labeled with one of the negative classes."
],
"extractive_spans": [],
"free_form_answer": " high-quality datasets from SemEval-2016 “Sentiment Analysis in Twitter” task",
"highlighted_evidence": [
"Ternary and fine-grained sentiment classification were part of the SemEval-2016 “Sentiment Analysis in Twitter” task BIBREF16 . We use the high-quality datasets the challenge organizers released."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Ternary and fine-grained sentiment classification were part of the SemEval-2016 “Sentiment Analysis in Twitter” task BIBREF16 . We use the high-quality datasets the challenge organizers released. The dataset for fine-grained classification is split in training, development, development_test and test parts. In the rest, we refer to these splits as train, development and test, where train is composed by the training and the development instances. Table TABREF7 presents an overview of the data. As discussed in BIBREF16 and illustrated in the Table, the fine-grained dataset is highly unbalanced and skewed towards the positive sentiment: only INLINEFORM0 of the training examples are labeled with one of the negative classes."
],
"extractive_spans": [
" SemEval-2016 “Sentiment Analysis in Twitter”"
],
"free_form_answer": "",
"highlighted_evidence": [
"Ternary and fine-grained sentiment classification were part of the SemEval-2016 “Sentiment Analysis in Twitter” task BIBREF16 . We use the high-quality datasets the challenge organizers released."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"What was the baseline?",
"By how much did they improve?",
"What dataset did they use?"
],
"question_id": [
"37edc25e39515ffc2d92115d2fcd9e6ceb18898b",
"e431661f17347607c3d3d9764928385a8f3d9650",
"876700622bd6811d903e65314ac75971bbe23dcc"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1 The example demonstrates the different levels of sentiment a tweet may convey. Also, note the Twitter-specific use of language and symbols.",
"Figure 1. The neural network architecture for multitask learning. The biLSTM output is transformed by the hidden layers H1, HM and is led to N output layers, one for each of the tasks. The lower part of the network can be used to incorporate additional information.",
"Table 2 Cardinality and class distributions of the datasets.",
"Table 3 The scores on MAEM for the systems. The best (lowest) score is shown in bold and is achieved in the multitask setting with the biLSTM architecture of Figure 1.",
"Figure 2. F1 scores using the nbow+ representations. The best performance is achieved with the multitask setting."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"3-Table2-1.png",
"5-Table3-1.png",
"5-Figure2-1.png"
]
} | [
"By how much did they improve?",
"What dataset did they use?"
] | [
[
"1707.03569-Multitask Learning for Twitter Sentiment Classification-2",
"1707.03569-Experimental setup-6",
"1707.03569-5-Table3-1.png",
"1707.03569-Experimental setup-10"
],
[
"1707.03569-Experimental setup-1"
]
] | [
"They decrease MAE in 0.34",
" high-quality datasets from SemEval-2016 “Sentiment Analysis in Twitter” task"
] | 192 |
2003.11563 | Cost-Sensitive BERT for Generalisable Sentence Classification with Imbalanced Data | The automatic identification of propaganda has gained significance in recent years due to technological and social changes in the way news is generated and consumed. That this task can be addressed effectively using BERT, a powerful new architecture which can be fine-tuned for text classification tasks, is not surprising. However, propaganda detection, like other tasks that deal with news documents and other forms of decontextualized social communication (e.g. sentiment analysis), inherently deals with data whose categories are simultaneously imbalanced and dissimilar. We show that BERT, while capable of handling imbalanced classes with no additional data augmentation, does not generalise well when the training and test data are sufficiently dissimilar (as is often the case with news sources, whose topics evolve over time). We show how to address this problem by providing a statistical measure of similarity between datasets and a method of incorporating cost-weighting into BERT when the training and test sets are dissimilar. We test these methods on the Propaganda Techniques Corpus (PTC) and achieve the second-highest score on sentence-level propaganda classification. | {
"paragraphs": [
[
"The challenges of imbalanced classification—in which the proportion of elements in each class for a classification task significantly differ—and of the ability to generalise on dissimilar data have remained important problems in Natural Language Processing (NLP) and Machine Learning in general. Popular NLP tasks including sentiment analysis, propaganda detection, and event extraction from social media are all examples of imbalanced classification problems. In each case the number of elements in one of the classes (e.g. negative sentiment, propagandistic content, or specific events discussed on social media, respectively) is significantly lower than the number of elements in the other classes.",
"The recently introduced BERT language model for transfer learning BIBREF0 uses a deep bidirectional transformer architecture to produce pre-trained context-dependent embeddings. It has proven to be powerful in solving many NLP tasks and, as we find, also appears to handle imbalanced classification well, thus removing the need to use standard methods of data augmentation to mitigate this problem (see Section SECREF11 for related work and Section SECREF16 for analysis).",
"BERT is credited with the ability to adapt to many tasks and data with very little training BIBREF0. However, we show that BERT fails to perform well when the training and test data are significantly dissimilar, as is the case with several tasks that deal with social and news data. In these cases, the training data is necessarily a subset of past data, while the model is likely to be used on future data which deals with different topics. This work addresses this problem by incorporating cost-sensitivity (Section SECREF19) into BERT.",
"We test these methods by participating in the Shared Task on Fine-Grained Propaganda Detection for the 2nd Workshop on NLP for Internet Freedom, for which we achieve the second rank on sentence-level classification of propaganda, confirming the importance of cost-sensitivity when the training and test sets are dissimilar."
],
[
"The term `propaganda' derives from propagare in post-classical Latin, as in “propagation of the faith\" BIBREF1, and thus has from the beginning been associated with an intentional and potentially multicast communication; only later did it become a pejorative term. It was pragmatically defined in the World War II era as “the expression of an opinion or an action by individuals or groups deliberately designed to influence the opinions or the actions of other individuals or groups with reference to predetermined ends\" BIBREF2.",
"For the philosopher and sociologist Jacques Ellul, however, in a society with mass communication, propaganda is inevitable and thus it is necessary to become more aware of it BIBREF3; but whether or not to classify a given strip of text as propaganda depends not just on its content but on its use on the part of both addressers and addressees BIBREF1, and this fact makes the automated detection of propaganda intrinsically challenging.",
"Despite this difficulty, interest in automatically detecting misinformation and/or propaganda has gained significance due to the exponential growth in online sources of information combined with the speed with which information is shared today. The sheer volume of social interactions makes it impossible to manually check the veracity of all information being shared. Automation thus remains a potentially viable method of ensuring that we continue to enjoy the benefits of a connected world without the spread of misinformation through either ignorance or malicious intent.",
"In the task introduced by BIBREF4, we are provided with articles tagged as propaganda at the sentence and fragment (or span) level and are tasked with making predictions on a development set followed by a final held-out test set. We note this gives us access to the articles in the development and test sets but not their labels.",
"We participated in this task under the team name ProperGander and were placed 2nd on the sentence level classification task where we make use of our methods of incorporating cost-sensitivity into BERT. We also participated in the fragment level task and were placed 7th. The significant contributions of this work are:",
"We show that common (`easy') methods of data augmentation for dealing with class imbalance do not improve base BERT performance.",
"We provide a statistical method of establishing the similarity of datasets.",
"We incorporate cost-sensitivity into BERT to enable models to adapt to dissimilar datasets.",
"We release all our program code on GitHub and Google Colaboratory, so that other researchers can benefit from this work."
],
[
"Most of the existing works on propaganda detection focus on identifying propaganda at the news article level, or even at the news outlet level with the assumption that each of the articles of the suspected propagandistic outlet are propaganda BIBREF5, BIBREF6.",
"Here we study two tasks that are more fine-grained, specifically propaganda detection at the sentence and phrase (fragment) levels BIBREF4. This fine-grained setup aims to train models that identify linguistic propaganda techniques rather than distinguishing between the article source styles.",
"BIBREF4 EMNLP19DaSanMartino were the first to propose this problem setup and release it as a shared task. Along with the released dataset, BIBREF4 proposed a multi-granularity neural network, which uses the deep bidirectional transformer architecture known as BERT, which features pre-trained context-dependent embeddings BIBREF0. Their system takes a joint learning approach to the sentence- and phrase-level tasks, concatenating the output representation of the less granular (sentence-level) task with the more fine-grained task using learned weights.",
"In this work we also take the BERT model as the basis of our approach and focus on the class imbalance as well as the lack of similarity between training and test data inherent to the task."
],
[
"A common issue for many Natural Language Processing (NLP) classification tasks is class imbalance, the situation where one of the class categories comprises a significantly larger proportion of the dataset than the other classes. It is especially prominent in real-world datasets and complicates classification when the identification of the minority class is of specific importance.",
"Models trained on the basis of minimising errors for imbalanced datasets tend to more frequently predict the majority class; achieving high accuracy in such cases can be misleading. Because of this, the macro-averaged F-score, chosen for this competition, is a more suitable metric as it weights the performance on each class equally.",
"As class imbalance is a widespread issue, multiple techniques have been developed that help alleviate it BIBREF7, BIBREF8, by either adjusting the model (e.g. changing the performance metric) or changing the data (e.g. oversampling the minority class or undersampling the majority class)."
],
[
"Cost-sensitive classification can be used when the “cost” of mislabelling one class is higher than that of mislabelling other classes BIBREF9, BIBREF10. For example, the real cost to a bank of miscategorising a large fraudulent transaction as authentic is potentially higher than miscategorising (perhaps only temporarily) a valid transaction as fraudulent. Cost-sensitive learning tackles the issue of class imbalance by changing the cost function of the model such that misclassification of training examples from the minority class carries more weight and is thus more `expensive'. This is achieved by simply multiplying the loss of each example by a certain factor. This cost-sensitive learning technique takes misclassification costs into account during model training, and does not modify the imbalanced data distribution directly."
],
[
"Common methods that tackle the problem of class imbalance by modifying the data to create balanced datasets are undersampling and oversampling. Undersampling randomly removes instances from the majority class and is only suitable for problems with an abundance of data. Oversampling means creating more minority class instances to match the size of the majority class. Oversampling methods range from simple random oversampling, i.e. repeating the training procedure on instances from the minority class, chosen at random, to the more complex, which involves constructing synthetic minority-class samples. Random oversampling is similar to cost-sensitive learning as repeating the sample several times makes the cost of its mis-classification grow proportionally. Kolomiyets et al. kolomiyets2011model, Zhang et al. zhang2015character, and Wang and Yang wang2015s perform data augmentation using synonym replacement, i.e. replacing random words in sentences with their synonyms or nearest-neighbor embeddings, and show its effectiveness on multiple tasks and datasets. Wei et al. wei2019eda provide a great overview of `easy' data augmentation (EDA) techniques for NLP, including synonym replacement as described above, and random deletion, i.e. removing words in the sentence at random with pre-defined probability. They show the effectiveness of EDA across five text classification tasks. However, they mention that EDA may not lead to substantial improvements when using pre-trained models. In this work we test this claim by comparing performance gains of using cost-sensitive learning versus two data augmentation methods, synonym replacement and random deletion, with a pre-trained BERT model.",
"More complex augmentation methods include back-translation BIBREF11, translational data augmentation BIBREF12, and noising BIBREF13, but these are out of the scope of this study."
],
[
"The Propaganda Techniques Corpus (PTC) dataset for the 2019 Shared Task on Fine-Grained Propaganda consists of a training set of 350 news articles, consisting of just over 16,965 total sentences, in which specifically propagandistic fragments have been manually spotted and labelled by experts. This is accompanied by a development set (or dev set) of 61 articles with 2,235 total sentences, whose labels are maintained by the shared task organisers; and two months after the release of this data, the organisers released a test set of 86 articles and 3,526 total sentences. In the training set, 4,720 ($\\sim 28\\%$) of the sentences have been assessed as containing propaganda, with 12,245 sentences ($\\sim 72 \\%$) as non-propaganda, demonstrating a clear class imbalance.",
"In the binary sentence-level classification (SLC) task, a model is trained to detect whether each and every sentence is either 'propaganda' or 'non-propaganda'; in the more challenging field-level classification (FLC) task, a model is trained to detect one of 18 possible propaganda technique types in spans of characters within sentences. These propaganda types are listed in BIBREF4 and range from those which might be recognisable at the lexical level (e.g. Name_Calling, Repetition), and those which would likely need to incorporate semantic understanding (Red_Herring, Straw_Man).",
"For several example sentences from a sample document annotated with fragment-level classifications (FLC) (Figure FIGREF13). The corresponding sentence-level classification (SLC) labels would indicate that sentences 3, 4, and 7 are 'propaganda' while the the other sentences are `non-propaganda'."
],
[
"One of the most interesting aspects of the data provided for this task is the notable difference between the training and the development/test sets. We emphasise that this difference is realistic and reflective of real world news data, in which major stories are often accompanied by the introduction of new terms, names, and even phrases. This is because the training data is a subset of past data while the model is to be used on future data which deals with different newsworthy topics.",
"We demonstrate this difference statistically by using a method for finding the similarity of corpora suggested by BIBREF14. We use the Wilcoxon signed-rank test BIBREF15 which compares the frequency counts of randomly sampled elements from different datasets to determine if those datasets have a statistically similar distribution of elements.",
"We implement this as follows. For each of the training, development and test sets, we extract all words (retaining the repeats) while ignoring a set of stopwords (identified through the Python Natural Language Toolkit). We then extract 10,000 samples (with replacements) for various pairs of these datasets (training, development, and test sets along with splits of each of these datasets). Finally, we use comparative word frequencies from the two sets to calculate the p-value using the Wilcoxon signed-rank test. Table TABREF15 provides the minimum and maximum p-values and their interpretations for ten such runs of each pair reported.",
"With p-value less than 0.05, we show that the train, development and test sets are self-similar and also significantly different from each other. In measuring self-similarity, we split each dataset after shuffling all sentences. While this comparison is made at the sentence level (as opposed to the article level), it is consistent with the granularity used for propaganda detection, which is also at the sentence level. We also perform measurements of self similarity after splitting the data at the article level and find that the conclusions of similarity between the sets hold with a p-value threshold of 0.001, where p-values for similarity between the training and dev/test sets are orders of magnitude lower compared to self-similarity. Since we use random sampling we run this test 10 times and present the both the maximum and minimum p-values. We include the similarity between 25% of a dataset and the remaining 75% of that set because that is the train/test ratio we use in our experiments, further described in our methodology (Section SECREF4).",
"This analysis shows that while all splits of each of the datasets are statistically similar, the training set (and the split of the training set that we use for experimentation) are significantly different from the development and test sets. While our analysis does show that the development and the test sets are dissimilar, we note (based on the p-values) that they are significantly more similar to each other than they are to the training set."
],
[
"We were provided with two tasks: (1) propaganda fragment-level identification (FLC) and (2) propagandistic sentence-level identification (SLC). While we develop systems for both tasks, our main focus is toward the latter. Given the differences between the training, development, and test sets, we focus on methods for generalising our models. We note that propaganda identification is, in general, an imbalanced binary classification problem as most sentences are not propagandistic.",
"Due to the non-deterministic nature of fast GPU computations, we run each of our models three times and report the average of these three runs through the rest of this section. When picking the model to use for our final submission, we pick the model that performs best on the development set.",
"When testing our models, we split the labelled training data into two non-overlapping parts: the first one, consisting of 75% of the training data is used to train models, whereas the other is used to test the effectiveness of the models. All models are trained and tested on the same split to ensure comparability. Similarly, to ensure that our models remain comparable, we continue to train on the same 75% of the training set even when testing on the development set.",
"Once the best model is found using these methods, we train that model on all of the training data available before then submitting the results on the development set to the leaderboard. These results are detailed in the section describing our results (Section SECREF5)."
],
[
"The sentence level classification task is an imbalanced binary classification problem that we address using BERT BIBREF0. We use BERTBASE, uncased, which consists of 12 self-attention layers, and returns a 768-dimension vector that representation a sentence. So as to make use of BERT for sentence classification, we include a fully connected layer on top of the BERT self-attention layers, which classifies the sentence embedding provided by BERT into the two classes of interest (propaganda or non-propaganda).",
"We attempt to exploit various data augmentation techniques to address the problem of class imbalance. Table TABREF17 shows the results of our experiments for different data augmentation techniques when, after shuffling the training data, we train the model on 75% of the training data and test it on the remaining 25% of the training data and the development data.",
"We observe that BERT without augmentation consistently outperforms BERT with augmentation in the experiments when the model is trained on 75% of the training data and evaluated on the rest, i.e trained and evaluated on similar data, coming from the same distribution. This is consistent with observations by Wei et al. wei2019eda that contextual word embeddings do not gain from data augmentation. The fact that we shuffle the training data prior to splitting it into training and testing subsets could imply that the model is learning to associate topic words, such as `Mueller', as propaganda. However, when we perform model evaluation using the development set, which is dissimilar to the training, we observe that synonym insertion and word dropping techniques also do not bring performance gains, while random oversampling increases performance over base BERT by 4%. Synonym insertion provides results very similar to base BERT, while random deletion harms model performance producing lower scores. We believe that this could be attributed to the fact that synonym insertion and random word dropping involve the introduction of noise to the data, while oversampling does not. As we are working with natural language data, this type of noise can in fact change the meaning of the sentence. Oversampling on the other hand purely increases the importance of the minority class by repeating training on the unchanged instances.",
"So as to better understand the aspects of oversampling that contribute to these gains, we perform a class-wise performance analysis of BERT with/without oversampling. The results of these experiments (Table TABREF18) show that oversampling increases the overall recall while maintaining precision. This is achieved by significantly improving the recall of the minority class (propaganda) at the cost of the recall of the majority class.",
"So far we have been able to establish that a) the training and test sets are dissimilar, thus requiring us to generalise our model, b) oversampling provides a method of generalisation, and c) oversampling does this while maintaining recall on the minority (and thus more interesting) class.",
"Given this we explore alternative methods of increasing minority class recall without a significant drop in precision. One such method is cost-sensitive classification, which differs from random oversampling in that it provides a more continuous-valued and consistent method of weighting samples of imbalanced training data; for example, random oversampling will inevitably emphasise some training instances at the expense of others. We detail our methods of using cost-sensitive classification in the next section. Further experiments with oversampling might have provided insights into the relationships between these methods, which we leave for future exploration."
],
[
"As discussed in Section SECREF10, cost-sensitive classification can be performed by weighting the cost function. We increase the weight of incorrectly labelling a propagandistic sentence by altering the cost function of the training of the final fully connected layer of our model previously described in Section SECREF16. We make these changes through the use of PyTorch BIBREF16 which calculates the cross-entropy loss for a single prediction $x$, an array where the $j^{th}$ element represents the models prediction for class $j$, labelled with the class $class$ as given by Equation DISPLAY_FORM20.",
"The cross-entropy loss given in Equation DISPLAY_FORM20 is modified to accommodate an array $weight$, the $i^{th}$ element of which represents the weight of the $i^{th}$ class, as described in Equation DISPLAY_FORM21.",
"Intuitively, we increase the cost of getting the classification of an “important” class wrong and corresponding decrees the cost of getting a less important class wrong. In our case, we increase the cost of mislabelling the minority class which is “propaganda” (as opposed to “non-propaganda”).",
"We expect the effect of this to be similar to that of oversampling, in that it is likely to enable us to increase the recall of the minority class thus resulting in the decrease in recall of the overall model while maintaining high precision. We reiterate that this specific change to a model results in increasing the model's ability to better identify elements belonging to the minority class in dissimilar datasets when using BERT.",
"We explore the validity of this by performing several experiments with different weights assigned to the minority class. We note that in our experiments use significantly higher weights than the weights proportional to class frequencies in the training data, that are common in literature BIBREF17. Rather than directly using the class proportions of the training set, we show that tuning weights based on performance on the development set is more beneficial. Figure FIGREF22 shows the results of these experiments wherein we are able to maintain the precision on the subset of the training set used for testing while reducing its recall and thus generalising the model. The fact that the model is generalising on a dissimilar dataset is confirmed by the increase in the development set F1 score. We note that the gains are not infinite and that a balance must be struck based on the amount of generalisation and the corresponding loss in accuracy. The exact weight to use for the best transfer of classification accuracy is related to the dissimilarity of that other dataset and hence is to be obtained experimentally through hyperparameter search. Our experiments showed that a value of 4 is best suited for this task.",
"We do not include the complete results of our experiments here due to space constraints but include them along with charts and program code on our project website. Based on this exploration we find that the best weights for this particular dataset are 1 for non-propaganda and 4 for propaganda and we use this to train the final model used to submit results to the leaderboard. We also found that adding Part of Speech tags and Named Entity information to BERT embeddings by concatenating these one-hot vectors to the BERT embeddings does not improve model performance. We describe these results in Section SECREF5."
],
[
"In addition to participating in the Sentence Level Classification task we also participate in the Fragment Level Classification task. We note that extracting fragments that are propagandistic is similar to the task of Named Entity Recognition, in that they are both span extraction tasks, and so use a BERT based model designed for this task - We build on the work by BIBREF18 which makes use of Continuous Random Field stacked on top of an LSTM to predict spans. This architecture is standard amongst state of the art models that perform span identification.",
"While the same span of text cannot have multiple named entity labels, it can have different propaganda labels. We get around this problem by picking one of the labels at random. Additionally, so as to speed up training, we only train our model on those sentences that contain some propagandistic fragment. In hindsight, we note that both these decisions were not ideal and discuss what we might have otherwise done in Section SECREF7."
],
[
"In this section, we show our rankings on the leaderboard on the test set. Unlike the previous exploratory sections, in which we trained our model on part of the training set, we train models described in this section on the complete training set."
],
[
"Our best performing model, selected on the basis of a systematic analysis of the relationship between cost weights and recall, places us second amongst the 25 teams that submitted their results on this task. We present our score on the test set alongside those of comparable teams in Table TABREF25. We note that the task description paper BIBREF4 describes a method of achieving an F1 score of 60.98% on a similar task although this reported score is not directly comparable to the results on this task because of the differences in testing sets."
],
[
"We train the model described in Section SECREF23 on the complete training set before submitting to the leaderboard. Our best performing model was placed 7th amongst the 13 teams that submitted results for this task. We present our score on the test set alongside those of comparable teams in Table TABREF27. We note that the task description paper BIBREF4 describes a method of achieving an F1 score of 22.58% on a similar task although, this reported score is not directly comparable to the results on this task.",
"One of the major setbacks to our method for identifying sentence fragments was the loss of training data as a result of randomly picking one label when the same fragment had multiple labels. This could have been avoided by training different models for each label and simply concatenating the results. Additionally, training on all sentences, including those that did not contain any fragments labelled as propagandistic would have likely improved our model performance. We intend to perform these experiments as part of our ongoing research."
],
[
"It is worth reflecting on the nature of the shared task dataset (PTC corpus) and its structural correspondence (or lack thereof) to some of the definitions of propaganda mentioned in the introduction. First, propaganda is a social phenomenon and takes place as an act of communication BIBREF19, and so it is more than a simple information-theoretic message of zeros and ones—it also incorporates an addresser and addressee(s), each in phatic contact (typically via broadcast media), ideally with a shared denotational code and contextual surround(s) BIBREF20.",
"As such, a dataset of decontextualised documents with labelled sentences, devoid of authorial or publisher metadata, has taken us at some remove from even a simple everyday definition of propaganda. Our models for this shared task cannot easily incorporate information about the addresser or addressee; are left to assume a shared denotational code between author and reader (one perhaps simulated with the use of pre-trained word embeddings); and they are unaware of when or where the act(s) of propagandistic communication took place. This slipperiness is illustrated in our example document (Fig. FIGREF13): note that while Sentences 3 and 7, labelled as propaganda, reflect a propagandistic attitude on the part of the journalist and/or publisher, Sentence 4—also labelled as propaganda in the training data—instead reflects a “flag-waving\" propagandistic attitude on the part of U.S. congressman Jeff Flake, via the conventions of reported speech BIBREF21. While reported speech often is signaled by specific morphosyntactic patterns (e.g. the use of double-quotes and “Flake said\") BIBREF22, we argue that human readers routinely distinguish propagandistic reportage from the propagandastic speech acts of its subjects, and to conflate these categories in a propaganda detection corpus may contribute to the occurrence of false positives/negatives."
],
[
"In this work we have presented a method of incorporating cost-sensitivity into BERT to allow for better generalisation and additionally, we provide a simple measure of corpus similarity to determine when this method is likely to be useful. We intend to extend our analysis of the ability to generalise models to less similar data by experimenting on other datasets and models. We hope that the release of program code and documentation will allow the research community to help in this experimentation while exploiting these methods."
],
[
"We would like to thank Dr Leandro Minku from the University of Birmingham for his insights into and help with the statistical analysis presented in this paper.",
"This work was also partially supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1. Work by Elena Kochkina was partially supported by the Leverhulme Trust through the Bridges Programme and Warwick CDT for Urban Science & Progress under the EPSRC Grant Number EP/L016400/1."
]
],
"section_name": [
"Introduction",
"Introduction ::: Detecting Propaganda",
"Related work ::: Propaganda detection",
"Related work ::: Class imbalance",
"Related work ::: Class imbalance ::: Cost-sensitive learning",
"Related work ::: Class imbalance ::: Data augmentation",
"Dataset",
"Dataset ::: Data Distribution",
"Methodology",
"Methodology ::: Class Imbalance in Sentence Level Classification",
"Methodology ::: Cost-sensitive Classification",
"Methodology ::: Fragment-level classification (FLC)",
"Results",
"Results ::: Results on the SLC task",
"Results ::: Results on the FLC task",
"Issues of Decontextualization in Automated Propaganda Detection",
"Conclusions and Future Work",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"6750250aedce5e946ce31318f2a22c5222313ad4",
"9516dd2520cfbcc7b191c5e9225396a21114aa70"
],
"answer": [
{
"evidence": [
"The term `propaganda' derives from propagare in post-classical Latin, as in “propagation of the faith\" BIBREF1, and thus has from the beginning been associated with an intentional and potentially multicast communication; only later did it become a pejorative term. It was pragmatically defined in the World War II era as “the expression of an opinion or an action by individuals or groups deliberately designed to influence the opinions or the actions of other individuals or groups with reference to predetermined ends\" BIBREF2."
],
"extractive_spans": [
"an intentional and potentially multicast communication",
"“the expression of an opinion or an action by individuals or groups deliberately designed to influence the opinions or the actions of other individuals or groups with reference to predetermined ends\""
],
"free_form_answer": "",
"highlighted_evidence": [
"The term `propaganda' derives from propagare in post-classical Latin, as in “propagation of the faith\" BIBREF1, and thus has from the beginning been associated with an intentional and potentially multicast communication; only later did it become a pejorative term. It was pragmatically defined in the World War II era as “the expression of an opinion or an action by individuals or groups deliberately designed to influence the opinions or the actions of other individuals or groups with reference to predetermined ends\" BIBREF2."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The term `propaganda' derives from propagare in post-classical Latin, as in “propagation of the faith\" BIBREF1, and thus has from the beginning been associated with an intentional and potentially multicast communication; only later did it become a pejorative term. It was pragmatically defined in the World War II era as “the expression of an opinion or an action by individuals or groups deliberately designed to influence the opinions or the actions of other individuals or groups with reference to predetermined ends\" BIBREF2.",
"For the philosopher and sociologist Jacques Ellul, however, in a society with mass communication, propaganda is inevitable and thus it is necessary to become more aware of it BIBREF3; but whether or not to classify a given strip of text as propaganda depends not just on its content but on its use on the part of both addressers and addressees BIBREF1, and this fact makes the automated detection of propaganda intrinsically challenging.",
"It is worth reflecting on the nature of the shared task dataset (PTC corpus) and its structural correspondence (or lack thereof) to some of the definitions of propaganda mentioned in the introduction. First, propaganda is a social phenomenon and takes place as an act of communication BIBREF19, and so it is more than a simple information-theoretic message of zeros and ones—it also incorporates an addresser and addressee(s), each in phatic contact (typically via broadcast media), ideally with a shared denotational code and contextual surround(s) BIBREF20.",
"FLOAT SELECTED: Figure 1: Excerpt of an example (truncated) news document with three separate field-level classification (FLC) tags, for LOADED LANGUAGE, FLAG-WAVING, AND WHATABOUTISM.",
"As such, a dataset of decontextualised documents with labelled sentences, devoid of authorial or publisher metadata, has taken us at some remove from even a simple everyday definition of propaganda. Our models for this shared task cannot easily incorporate information about the addresser or addressee; are left to assume a shared denotational code between author and reader (one perhaps simulated with the use of pre-trained word embeddings); and they are unaware of when or where the act(s) of propagandistic communication took place. This slipperiness is illustrated in our example document (Fig. FIGREF13): note that while Sentences 3 and 7, labelled as propaganda, reflect a propagandistic attitude on the part of the journalist and/or publisher, Sentence 4—also labelled as propaganda in the training data—instead reflects a “flag-waving\" propagandistic attitude on the part of U.S. congressman Jeff Flake, via the conventions of reported speech BIBREF21. While reported speech often is signaled by specific morphosyntactic patterns (e.g. the use of double-quotes and “Flake said\") BIBREF22, we argue that human readers routinely distinguish propagandistic reportage from the propagandastic speech acts of its subjects, and to conflate these categories in a propaganda detection corpus may contribute to the occurrence of false positives/negatives."
],
"extractive_spans": [
"First, propaganda is a social phenomenon and takes place as an act of communication BIBREF19, and so it is more than a simple information-theoretic message of zeros and ones—it also incorporates an addresser and addressee(s), each in phatic contact (typically via broadcast media), ideally with a shared denotational code and contextual surround(s) BIBREF20."
],
"free_form_answer": "",
"highlighted_evidence": [
"The term `propaganda' derives from propagare in post-classical Latin, as in “propagation of the faith\" BIBREF1, and thus has from the beginning been associated with an intentional and potentially multicast communication; only later did it become a pejorative term. It was pragmatically defined in the World War II era as “the expression of an opinion or an action by individuals or groups deliberately designed to influence the opinions or the actions of other individuals or groups with reference to predetermined ends\" BIBREF2.",
"For the philosopher and sociologist Jacques Ellul, however, in a society with mass communication, propaganda is inevitable and thus it is necessary to become more aware of it BIBREF3; but whether or not to classify a given strip of text as propaganda depends not just on its content but on its use on the part of both addressers and addressees BIBREF1, and this fact makes the automated detection of propaganda intrinsically challenging.",
"It is worth reflecting on the nature of the shared task dataset (PTC corpus) and its structural correspondence (or lack thereof) to some of the definitions of propaganda mentioned in the introduction. First, propaganda is a social phenomenon and takes place as an act of communication BIBREF19, and so it is more than a simple information-theoretic message of zeros and ones—it also incorporates an addresser and addressee(s), each in phatic contact (typically via broadcast media), ideally with a shared denotational code and contextual surround(s) BIBREF20.",
"FLOAT SELECTED: Figure 1: Excerpt of an example (truncated) news document with three separate field-level classification (FLC) tags, for LOADED LANGUAGE, FLAG-WAVING, AND WHATABOUTISM.",
"As such, a dataset of decontextualised documents with labelled sentences, devoid of authorial or publisher metadata, has taken us at some remove from even a simple everyday definition of propaganda. Our models for this shared task cannot easily incorporate information about the addresser or addressee; are left to assume a shared denotational code between author and reader (one perhaps simulated with the use of pre-trained word embeddings); and they are unaware of when or where the act(s) of propagandistic communication took place. This slipperiness is illustrated in our example document (Fig. FIGREF13): note that while Sentences 3 and 7, labelled as propaganda, reflect a propagandistic attitude on the part of the journalist and/or publisher, Sentence 4—also labelled as propaganda in the training data—instead reflects a “flag-waving\" propagandistic attitude on the part of U.S. congressman Jeff Flake, via the conventions of reported speech BIBREF21. While reported speech often is signaled by specific morphosyntactic patterns (e.g. the use of double-quotes and “Flake said\") BIBREF22, we argue that human readers routinely distinguish propagandistic reportage from the propagandastic speech acts of its subjects, and to conflate these categories in a propaganda detection corpus may contribute to the occurrence of false positives/negatives."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"cec1bdd89961f6aa1fea5dc517350e80a46ad452"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: F1 scores on an unseen (not used for training) part of the training set and the development set on BERT using different augmentation techniques.",
"FLOAT SELECTED: Table 3: Class-wise precision and recall with and without oversampling (OS) achieved on unseen part of the training set.",
"FLOAT SELECTED: Table 4: Our results on the SLC task (2nd, in bold) alongside comparable results from the competition leaderboard.",
"FLOAT SELECTED: Table 5: Our results on the FLC task (7th, in bold) alongside those of better performing teams from the competition leaderboard.",
"So as to better understand the aspects of oversampling that contribute to these gains, we perform a class-wise performance analysis of BERT with/without oversampling. The results of these experiments (Table TABREF18) show that oversampling increases the overall recall while maintaining precision. This is achieved by significantly improving the recall of the minority class (propaganda) at the cost of the recall of the majority class.",
"We explore the validity of this by performing several experiments with different weights assigned to the minority class. We note that in our experiments use significantly higher weights than the weights proportional to class frequencies in the training data, that are common in literature BIBREF17. Rather than directly using the class proportions of the training set, we show that tuning weights based on performance on the development set is more beneficial. Figure FIGREF22 shows the results of these experiments wherein we are able to maintain the precision on the subset of the training set used for testing while reducing its recall and thus generalising the model. The fact that the model is generalising on a dissimilar dataset is confirmed by the increase in the development set F1 score. We note that the gains are not infinite and that a balance must be struck based on the amount of generalisation and the corresponding loss in accuracy. The exact weight to use for the best transfer of classification accuracy is related to the dissimilarity of that other dataset and hence is to be obtained experimentally through hyperparameter search. Our experiments showed that a value of 4 is best suited for this task."
],
"extractive_spans": [
"precision",
"recall ",
"F1 score"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: F1 scores on an unseen (not used for training) part of the training set and the development set on BERT using different augmentation techniques.",
"FLOAT SELECTED: Table 3: Class-wise precision and recall with and without oversampling (OS) achieved on unseen part of the training set.",
"FLOAT SELECTED: Table 4: Our results on the SLC task (2nd, in bold) alongside comparable results from the competition leaderboard.",
"FLOAT SELECTED: Table 5: Our results on the FLC task (7th, in bold) alongside those of better performing teams from the competition leaderboard.",
"o as to better understand the aspects of oversampling that contribute to these gains, we perform a class-wise performance analysis of BERT with/without oversampling. The results of these experiments (Table TABREF18) show that oversampling increases the overall recall while maintaining precision.",
"The fact that the model is generalising on a dissimilar dataset is confirmed by the increase in the development set F1 score."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"2e58b28a9ce1a307267f607bb4f1eddf301ec379",
"32a7e3a8e09eb3434ee4167344bb096ca9c5da23"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"The Propaganda Techniques Corpus (PTC) dataset for the 2019 Shared Task on Fine-Grained Propaganda consists of a training set of 350 news articles, consisting of just over 16,965 total sentences, in which specifically propagandistic fragments have been manually spotted and labelled by experts. This is accompanied by a development set (or dev set) of 61 articles with 2,235 total sentences, whose labels are maintained by the shared task organisers; and two months after the release of this data, the organisers released a test set of 86 articles and 3,526 total sentences. In the training set, 4,720 ($\\sim 28\\%$) of the sentences have been assessed as containing propaganda, with 12,245 sentences ($\\sim 72 \\%$) as non-propaganda, demonstrating a clear class imbalance.",
"FLOAT SELECTED: Figure 1: Excerpt of an example (truncated) news document with three separate field-level classification (FLC) tags, for LOADED LANGUAGE, FLAG-WAVING, AND WHATABOUTISM."
],
"extractive_spans": [],
"free_form_answer": "English",
"highlighted_evidence": [
"The Propaganda Techniques Corpus (PTC) dataset for the 2019 Shared Task on Fine-Grained Propaganda consists of a training set of 350 news articles, consisting of just over 16,965 total sentences, in which specifically propagandistic fragments have been manually spotted and labelled by experts.",
"FLOAT SELECTED: Figure 1: Excerpt of an example (truncated) news document with three separate field-level classification (FLC) tags, for LOADED LANGUAGE, FLAG-WAVING, AND WHATABOUTISM."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How is \"propaganda\" defined for the purposes of this study?",
"What metrics are used in evaluation?",
"Which natural language(s) are studied in this paper?"
],
"question_id": [
"64af7f5c109ed10eda4fb1b70ecda21e6d5b96c8",
"b0a18628289146472aa42f992d0db85c200ec64b",
"72ce05546c81ada05885026470f4c8c218805055"
],
"question_writer": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: p-values representing the similarity between (parts of) the train, test and development sets.",
"Figure 1: Excerpt of an example (truncated) news document with three separate field-level classification (FLC) tags, for LOADED LANGUAGE, FLAG-WAVING, AND WHATABOUTISM.",
"Table 2: F1 scores on an unseen (not used for training) part of the training set and the development set on BERT using different augmentation techniques.",
"Table 3: Class-wise precision and recall with and without oversampling (OS) achieved on unseen part of the training set.",
"Figure 2: The impact of modifying the minority class weights on the performance on similar (subset of training set) and dissimilar (development) datasets. The method of increasing minority class weights is able to push the model towards generalisation while maintaining precision.",
"Table 4: Our results on the SLC task (2nd, in bold) alongside comparable results from the competition leaderboard.",
"Table 5: Our results on the FLC task (7th, in bold) alongside those of better performing teams from the competition leaderboard."
],
"file": [
"4-Table1-1.png",
"5-Figure1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Figure2-1.png",
"8-Table4-1.png",
"8-Table5-1.png"
]
} | [
"Which natural language(s) are studied in this paper?"
] | [
[
"2003.11563-Dataset-0",
"2003.11563-5-Figure1-1.png"
]
] | [
"English"
] | 195 |
1909.00105 | Generating Personalized Recipes from Historical User Preferences | Existing approaches to recipe generation are unable to create recipes for users with culinary preferences but incomplete knowledge of ingredients in specific dishes. We propose a new task of personalized recipe generation to help these users: expanding a name and incomplete ingredient details into complete natural-text instructions aligned with the user's historical preferences. We attend on technique- and recipe-level representations of a user's previously consumed recipes, fusing these 'user-aware' representations in an attention fusion layer to control recipe text generation. Experiments on a new dataset of 180K recipes and 700K interactions show our model's ability to generate plausible and personalized recipes compared to non-personalized baselines. | {
"paragraphs": [
[
"In the kitchen, we increasingly rely on instructions from cooking websites: recipes. A cook with a predilection for Asian cuisine may wish to prepare chicken curry, but may not know all necessary ingredients apart from a few basics. These users with limited knowledge cannot rely on existing recipe generation approaches that focus on creating coherent recipes given all ingredients and a recipe name BIBREF0. Such models do not address issues of personal preference (e.g. culinary tastes, garnish choices) and incomplete recipe details. We propose to approach both problems via personalized generation of plausible, user-specific recipes using user preferences extracted from previously consumed recipes.",
"Our work combines two important tasks from natural language processing and recommender systems: data-to-text generation BIBREF1 and personalized recommendation BIBREF2. Our model takes as user input the name of a specific dish, a few key ingredients, and a calorie level. We pass these loose input specifications to an encoder-decoder framework and attend on user profiles—learned latent representations of recipes previously consumed by a user—to generate a recipe personalized to the user's tastes. We fuse these `user-aware' representations with decoder output in an attention fusion layer to jointly determine text generation. Quantitative (perplexity, user-ranking) and qualitative analysis on user-aware model outputs confirm that personalization indeed assists in generating plausible recipes from incomplete ingredients.",
"While personalized text generation has seen success in conveying user writing styles in the product review BIBREF3, BIBREF4 and dialogue BIBREF5 spaces, we are the first to consider it for the problem of recipe generation, where output quality is heavily dependent on the content of the instructions—such as ingredients and cooking techniques.",
"To summarize, our main contributions are as follows:",
"We explore a new task of generating plausible and personalized recipes from incomplete input specifications by leveraging historical user preferences;",
"We release a new dataset of 180K+ recipes and 700K+ user reviews for this task;",
"We introduce new evaluation strategies for generation quality in instructional texts, centering on quantitative measures of coherence. We also show qualitatively and quantitatively that personalized models generate high-quality and specific recipes that align with historical user preferences."
],
[
"Large-scale transformer-based language models have shown surprising expressivity and fluency in creative and conditional long-text generation BIBREF6, BIBREF7. Recent works have proposed hierarchical methods that condition on narrative frameworks to generate internally consistent long texts BIBREF8, BIBREF9, BIBREF10. Here, we generate procedurally structured recipes instead of free-form narratives.",
"Recipe generation belongs to the field of data-to-text natural language generation BIBREF1, which sees other applications in automated journalism BIBREF11, question-answering BIBREF12, and abstractive summarization BIBREF13, among others. BIBREF14, BIBREF15 model recipes as a structured collection of ingredient entities acted upon by cooking actions. BIBREF0 imposes a `checklist' attention constraint emphasizing hitherto unused ingredients during generation. BIBREF16 attend over explicit ingredient references in the prior recipe step. Similar hierarchical approaches that infer a full ingredient list to constrain generation will not help personalize recipes, and would be infeasible in our setting due to the potentially unconstrained number of ingredients (from a space of 10K+) in a recipe. We instead learn historical preferences to guide full recipe generation.",
"A recent line of work has explored user- and item-dependent aspect-aware review generation BIBREF3, BIBREF4. This work is related to ours in that it combines contextual language generation with personalization. Here, we attend over historical user preferences from previously consumed recipes to generate recipe content, rather than writing styles."
],
[
"Our model's input specification consists of: the recipe name as a sequence of tokens, a partial list of ingredients, and a caloric level (high, medium, low). It outputs the recipe instructions as a token sequence: $\\mathcal {W}_r=\\lbrace w_{r,0}, \\dots , w_{r,T}\\rbrace $ for a recipe $r$ of length $T$. To personalize output, we use historical recipe interactions of a user $u \\in \\mathcal {U}$.",
"Encoder: Our encoder has three embedding layers: vocabulary embedding $\\mathcal {V}$, ingredient embedding $\\mathcal {I}$, and caloric-level embedding $\\mathcal {C}$. Each token in the (length $L_n$) recipe name is embedded via $\\mathcal {V}$; the embedded token sequence is passed to a two-layered bidirectional GRU (BiGRU) BIBREF17, which outputs hidden states for names $\\lbrace \\mathbf {n}_{\\text{enc},j} \\in \\mathbb {R}^{2d_h}\\rbrace $, with hidden size $d_h$. Similarly each of the $L_i$ input ingredients is embedded via $\\mathcal {I}$, and the embedded ingredient sequence is passed to another two-layered BiGRU to output ingredient hidden states as $\\lbrace \\mathbf {i}_{\\text{enc},j} \\in \\mathbb {R}^{2d_h}\\rbrace $. The caloric level is embedded via $\\mathcal {C}$ and passed through a projection layer with weights $W_c$ to generate calorie hidden representation $\\mathbf {c}_{\\text{enc}} \\in \\mathbb {R}^{2d_h}$.",
"Ingredient Attention: We apply attention BIBREF18 over the encoded ingredients to use encoder outputs at each decoding time step. We define an attention-score function $\\alpha $ with key $K$ and query $Q$:",
"with trainable weights $W_{\\alpha }$, bias $\\mathbf {b}_{\\alpha }$, and normalization term $Z$. At decoding time $t$, we calculate the ingredient context $\\mathbf {a}_{t}^{i} \\in \\mathbb {R}^{d_h}$ as:",
"Decoder: The decoder is a two-layer GRU with hidden state $h_t$ conditioned on previous hidden state $h_{t-1}$ and input token $w_{r, t}$ from the original recipe text. We project the concatenated encoder outputs as the initial decoder hidden state:",
"To bias generation toward user preferences, we attend over a user's previously reviewed recipes to jointly determine the final output token distribution. We consider two different schemes to model preferences from user histories: (1) recipe interactions, and (2) techniques seen therein (defined in data). BIBREF19, BIBREF20, BIBREF21 explore similar schemes for personalized recommendation.",
"Prior Recipe Attention: We obtain the set of prior recipes for a user $u$: $R^+_u$, where each recipe can be represented by an embedding from a recipe embedding layer $\\mathcal {R}$ or an average of the name tokens embedded by $\\mathcal {V}$. We attend over the $k$-most recent prior recipes, $R^{k+}_u$, to account for temporal drift of user preferences BIBREF22. These embeddings are used in the `Prior Recipe' and `Prior Name' models, respectively.",
"Given a recipe representation $\\mathbf {r} \\in \\mathbb {R}^{d_r}$ (where $d_r$ is recipe- or vocabulary-embedding size depending on the recipe representation) the prior recipe attention context $\\mathbf {a}_{t}^{r_u}$ is calculated as",
"Prior Technique Attention: We calculate prior technique preference (used in the `Prior Tech` model) by normalizing co-occurrence between users and techniques seen in $R^+_u$, to obtain a preference vector $\\rho _{u}$. Each technique $x$ is embedded via a technique embedding layer $\\mathcal {X}$ to $\\mathbf {x}\\in \\mathbb {R}^{d_x}$. Prior technique attention is calculated as",
"where, inspired by copy mechanisms BIBREF23, BIBREF24, we add $\\rho _{u,x}$ for technique $x$ to emphasize the attention by the user's prior technique preference.",
"Attention Fusion Layer: We fuse all contexts calculated at time $t$, concatenating them with decoder GRU output and previous token embedding:",
"We then calculate the token probability:",
"and maximize the log-likelihood of the generated sequence conditioned on input specifications and user preferences. fig:ex shows a case where the Prior Name model attends strongly on previously consumed savory recipes to suggest the usage of an additional ingredient (`cilantro')."
],
[
"We collect a novel dataset of 230K+ recipe texts and 1M+ user interactions (reviews) over 18 years (2000-2018) from Food.com. Here, we restrict to recipes with at least 3 steps, and at least 4 and no more than 20 ingredients. We discard users with fewer than 4 reviews, giving 180K+ recipes and 700K+ reviews, with splits as in tab:recipeixnstats.",
"Our model must learn to generate from a diverse recipe space: in our training data, the average recipe length is 117 tokens with a maximum of 256. There are 13K unique ingredients across all recipes. Rare words dominate the vocabulary: 95% of words appear $<$100 times, accounting for only 1.65% of all word usage. As such, we perform Byte-Pair Encoding (BPE) tokenization BIBREF25, BIBREF26, giving a training vocabulary of 15K tokens across 19M total mentions. User profiles are similarly diverse: 50% of users have consumed $\\le $6 recipes, while 10% of users have consumed $>$45 recipes.",
"We order reviews by timestamp, keeping the most recent review for each user as the test set, the second most recent for validation, and the remainder for training (sequential leave-one-out evaluation BIBREF27). We evaluate only on recipes not in the training set.",
"We manually construct a list of 58 cooking techniques from 384 cooking actions collected by BIBREF15; the most common techniques (bake, combine, pour, boil) account for 36.5% of technique mentions. We approximate technique adherence via string match between the recipe text and technique list."
],
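As a concrete illustration of the sequential leave-one-out protocol described above, here is a small, self-contained Python sketch. The field names `user`, `recipe`, and `ts` are placeholders, not the released data schema.

```python
# Hypothetical sketch of sequential leave-one-out splitting: per user, the most
# recent review goes to test, the second most recent to validation, the rest to train.
from collections import defaultdict

def leave_one_out_split(reviews):
    """reviews: list of dicts with 'user', 'recipe', 'ts' (placeholder field names)."""
    by_user = defaultdict(list)
    for r in reviews:
        by_user[r["user"]].append(r)

    train, valid, test = [], [], []
    for user, items in by_user.items():
        items.sort(key=lambda r: r["ts"])   # order reviews by timestamp
        if len(items) < 4:                  # mirror the >=4 reviews filter
            continue
        test.append(items[-1])              # most recent review
        valid.append(items[-2])             # second most recent
        train.extend(items[:-2])            # remainder
    return train, valid, test

toy = [{"user": "u1", "recipe": f"r{i}", "ts": i} for i in range(5)]
tr, va, te = leave_one_out_split(toy)
print(len(tr), len(va), len(te))  # 3 1 1
```

A further filter (not shown) would drop test recipes that also appear in the training set, matching the evaluation restriction above.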
[
"For training and evaluation, we provide our model with the first 3-5 ingredients listed in each recipe. We decode recipe text via top-$k$ sampling BIBREF7, finding $k=3$ to produce satisfactory results. We use a hidden size $d_h=256$ for both the encoder and decoder. Embedding dimensions for vocabulary, ingredient, recipe, techniques, and caloric level are 300, 10, 50, 50, and 5 (respectively). For prior recipe attention, we set $k=20$, the 80th %-ile for the number of user interactions. We use the Adam optimizer BIBREF28 with a learning rate of $10^{-3}$, annealed with a decay rate of 0.9 BIBREF29. We also use teacher-forcing BIBREF30 in all training epochs.",
"In this work, we investigate how leveraging historical user preferences can improve generation quality over strong baselines in our setting. We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8.",
"We observe that personalized models make more diverse recipes than baseline. They thus perform better in BLEU-1 with more key entities (ingredient mentions) present, but worse in BLEU-4, as these recipes are written in a personalized way and deviate from gold on the phrasal level. Similarly, the `Prior Name' model generates more unigram-diverse recipes than other personalized models and obtains a correspondingly lower BLEU-1 score.",
"Qualitative Analysis: We present sample outputs for a cocktail recipe in tab:samplerecipes, and additional recipes in the appendix. Generation quality progressively improves from generic baseline output to a blended cocktail produced by our best performing model. Models attending over prior recipes explicitly reference ingredients. The Prior Name model further suggests the addition of lemon and mint, which are reasonably associated with previously consumed recipes like coconut mousse and pork skewers.",
"Personalization: To measure personalization, we evaluate how closely the generated text corresponds to a particular user profile. We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles—one `gold' user who consumed the original recipe, and nine randomly generated user profiles. Following BIBREF8, we expect the highest likelihood for the recipe conditioned on the gold user. We measure user matching accuracy (UMA)—the proportion where the gold user is ranked highest—and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user. All personalized models beat baselines in both metrics, showing our models personalize generated recipes to the given user profiles. The Prior Name model achieves the best UMA and MRR by a large margin, revealing that prior recipe names are strong signals for personalization. Moreover, the addition of attention mechanisms to capture these signals improves language modeling performance over a strong non-personalized baseline.",
"Recipe Level Coherence: A plausible recipe should possess a coherent step order, and we evaluate this via a metric for recipe-level coherence. We use the neural scoring model from BIBREF33 to measure recipe-level coherence for each generated recipe. Each recipe step is encoded by BERT BIBREF34. Our scoring model is a GRU network that learns the overall recipe step ordering structure by minimizing the cosine similarity of recipe step hidden representations presented in the correct and reverse orders. Once pretrained, our scorer calculates the similarity of a generated recipe to the forward and backwards ordering of its corresponding gold label, giving a score equal to the difference between the former and latter. A higher score indicates better step ordering (with a maximum score of 2). tab:coherencemetrics shows that our personalized models achieve average recipe-level coherence scores of 1.78-1.82, surpassing the baseline at 1.77.",
"Recipe Step Entailment: Local coherence is also crucial to a user following a recipe: it is crucial that subsequent steps are logically consistent with prior ones. We model local coherence as an entailment task: predicting the likelihood that a recipe step follows the preceding. We sample several consecutive (positive) and non-consecutive (negative) pairs of steps from each recipe. We train a BERT BIBREF34 model to predict the entailment score of a pair of steps separated by a [SEP] token, using the final representation of the [CLS] token. The step entailment score is computed as the average of scores for each set of consecutive steps in each recipe, averaged over every generated recipe for a model, as shown in tab:coherencemetrics.",
"Human Evaluation: We presented 310 pairs of recipes for pairwise comparison BIBREF8 (details in appendix) between baseline and each personalized model, with results shown in tab:metricsontest. On average, human evaluators preferred personalized model outputs to baseline 63% of the time, confirming that personalized attention improves the semantic plausibility of generated recipes. We also performed a small-scale human coherence survey over 90 recipes, in which 60% of users found recipes generated by personalized models to be more coherent and preferable to those generated by baseline models."
],
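To make the personalization metrics concrete, here is a small sketch of how UMA and MRR could be computed from per-profile likelihoods. The input layout (gold user at index 0) is an illustrative convention, not the authors' evaluation code.

```python
# Hypothetical sketch of User Matching Accuracy (UMA) and MRR: for each generated
# recipe we are given log-likelihoods under several user profiles, index 0 being
# the gold user who actually consumed the recipe.
def uma_and_mrr(likelihoods_per_recipe):
    """likelihoods_per_recipe: list of score lists; element 0 of each list is the gold user."""
    hits, rr_sum = 0, 0.0
    for scores in likelihoods_per_recipe:
        # rank profiles by likelihood, highest first
        ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        rank_of_gold = ranking.index(0) + 1
        hits += int(rank_of_gold == 1)
        rr_sum += 1.0 / rank_of_gold
    n = len(likelihoods_per_recipe)
    return hits / n, rr_sum / n

uma, mrr = uma_and_mrr([[-1.2, -3.0, -2.5], [-2.0, -1.5, -4.0]])
print(round(uma, 2), round(mrr, 2))  # 0.5 0.75
```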
[
"In this paper, we propose a novel task: to generate personalized recipes from incomplete input specifications and user histories. On a large novel dataset of 180K recipes and 700K reviews, we show that our personalized generative models can generate plausible, personalized, and coherent recipes preferred by human evaluators for consumption. We also introduce a set of automatic coherence measures for instructional texts as well as personalization metrics to support our claims. Our future work includes generating structured representations of recipes to handle ingredient properties, as well as accounting for references to collections of ingredients (e.g. “dry mix\").",
"Acknowledgements. This work is partly supported by NSF #1750063. We thank all reviewers for their constructive suggestions, as well as Rei M., Sujoy P., Alicia L., Eric H., Tim S., Kathy C., Allen C., and Micah I. for their feedback."
],
[
"Our raw data consists of 270K recipes and 1.4M user-recipe interactions (reviews) scraped from Food.com, covering a period of 18 years (January 2000 to December 2018). See tab:int-stats for dataset summary statistics, and tab:samplegk for sample information about one user-recipe interaction and the recipe involved."
],
[
"See tab:samplechx for a sample recipe for chicken chili and tab:samplewaffle for a sample recipe for sweet waffles."
],
[
"We prepared a set of 15 pairwise comparisons per evaluation session, and collected 930 pairwise evaluations (310 per personalized model) over 62 sessions. For each pair, users were given a partial recipe specification (name and 3-5 key ingredients), as well as two generated recipes labeled `A' and `B'. One recipe is generated from our baseline encoder-decoder model and one recipe is generated by one of our three personalized models (Prior Tech, Prior Name, Prior Recipe). The order of recipe presentation (A/B) is randomly selected for each question. A screenshot of the user evaluation interface is given in fig:exeval. We ask the user to indicate which recipe they find more coherent, and which recipe best accomplishes the goal indicated by the recipe name. A screenshot of this survey interface is given in fig:exeval2."
]
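Below is a tiny sketch of how such a pairwise study could be tallied: the presentation order is randomized per question and preferences are counted per personalized model. All names and the toy judge are illustrative only, not the actual survey code.

```python
# Hypothetical sketch of tallying a pairwise preference study with randomized A/B order.
import random

def tally(pairs, rng=random.Random(0)):
    """pairs: list of (model_name, judge) where judge(model_first: bool) -> 'A' or 'B'."""
    wins = {}
    for model, judge in pairs:
        model_first = rng.random() < 0.5          # randomize which recipe is shown as 'A'
        choice = judge(model_first)
        picked_model = (choice == "A") == model_first
        wins.setdefault(model, []).append(picked_model)
    return {m: sum(v) / len(v) for m, v in wins.items()}

# toy judges that always prefer the personalized model's recipe
toy = [("Prior Name", lambda first: "A" if first else "B") for _ in range(10)]
print(tally(toy))  # {'Prior Name': 1.0}
```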
],
"section_name": [
"Introduction",
"Related Work",
"Approach",
"Recipe Dataset: Food.com",
"Experiments and Results",
"Conclusion",
"Appendix ::: Food.com: Dataset Details",
"Appendix ::: Generated Examples",
"Human Evaluation"
]
} | {
"answers": [
{
"annotation_id": [
"37357a176531d9d68ea91109ce51b9e248e8c4e6",
"810f720f4678662a8ab6ceb6846249f0b5c73dad",
"b2da2703b50acf5aeca96fb1bac17e4224ba6605"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Metrics on generated recipes from test set. D-1/2 = Distinct-1/2, UMA = User Matching Accuracy, MRR = Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model).",
"In this work, we investigate how leveraging historical user preferences can improve generation quality over strong baselines in our setting. We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8.",
"We observe that personalized models make more diverse recipes than baseline. They thus perform better in BLEU-1 with more key entities (ingredient mentions) present, but worse in BLEU-4, as these recipes are written in a personalized way and deviate from gold on the phrasal level. Similarly, the `Prior Name' model generates more unigram-diverse recipes than other personalized models and obtains a correspondingly lower BLEU-1 score.",
"Our model must learn to generate from a diverse recipe space: in our training data, the average recipe length is 117 tokens with a maximum of 256. There are 13K unique ingredients across all recipes. Rare words dominate the vocabulary: 95% of words appear $<$100 times, accounting for only 1.65% of all word usage. As such, we perform Byte-Pair Encoding (BPE) tokenization BIBREF25, BIBREF26, giving a training vocabulary of 15K tokens across 19M total mentions. User profiles are similarly diverse: 50% of users have consumed $\\le $6 recipes, while 10% of users have consumed $>$45 recipes.",
"Personalization: To measure personalization, we evaluate how closely the generated text corresponds to a particular user profile. We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles—one `gold' user who consumed the original recipe, and nine randomly generated user profiles. Following BIBREF8, we expect the highest likelihood for the recipe conditioned on the gold user. We measure user matching accuracy (UMA)—the proportion where the gold user is ranked highest—and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user. All personalized models beat baselines in both metrics, showing our models personalize generated recipes to the given user profiles. The Prior Name model achieves the best UMA and MRR by a large margin, revealing that prior recipe names are strong signals for personalization. Moreover, the addition of attention mechanisms to capture these signals improves language modeling performance over a strong non-personalized baseline."
],
"extractive_spans": [],
"free_form_answer": "Byte-Pair Encoding perplexity (BPE PPL),\nBLEU-1,\nBLEU-4,\nROUGE-L,\npercentage of distinct unigram (D-1),\npercentage of distinct bigrams(D-2),\nuser matching accuracy(UMA),\nMean Reciprocal Rank(MRR)\nPairwise preference over baseline(PP)",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Metrics on generated recipes from test set. D-1/2 = Distinct-1/2, UMA = User Matching Accuracy, MRR = Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model).",
" All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8.\n\nWe observe that personalized models make more diverse recipes than ba",
"As such, we perform Byte-Pair Encoding (BPE) tokenization BIBREF25, BIBREF26, giving a training vocabulary of 15K tokens across 19M total mentions. ",
"We measure user matching accuracy (UMA)—the proportion where the gold user is ranked highest—and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this work, we investigate how leveraging historical user preferences can improve generation quality over strong baselines in our setting. We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8.",
"Personalization: To measure personalization, we evaluate how closely the generated text corresponds to a particular user profile. We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles—one `gold' user who consumed the original recipe, and nine randomly generated user profiles. Following BIBREF8, we expect the highest likelihood for the recipe conditioned on the gold user. We measure user matching accuracy (UMA)—the proportion where the gold user is ranked highest—and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user. All personalized models beat baselines in both metrics, showing our models personalize generated recipes to the given user profiles. The Prior Name model achieves the best UMA and MRR by a large margin, revealing that prior recipe names are strong signals for personalization. Moreover, the addition of attention mechanisms to capture these signals improves language modeling performance over a strong non-personalized baseline.",
"Recipe Level Coherence: A plausible recipe should possess a coherent step order, and we evaluate this via a metric for recipe-level coherence. We use the neural scoring model from BIBREF33 to measure recipe-level coherence for each generated recipe. Each recipe step is encoded by BERT BIBREF34. Our scoring model is a GRU network that learns the overall recipe step ordering structure by minimizing the cosine similarity of recipe step hidden representations presented in the correct and reverse orders. Once pretrained, our scorer calculates the similarity of a generated recipe to the forward and backwards ordering of its corresponding gold label, giving a score equal to the difference between the former and latter. A higher score indicates better step ordering (with a maximum score of 2). tab:coherencemetrics shows that our personalized models achieve average recipe-level coherence scores of 1.78-1.82, surpassing the baseline at 1.77."
],
"extractive_spans": [
"BLEU-1/4 and ROUGE-L",
"likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles",
"user matching accuracy (UMA)",
"Mean Reciprocal Rank (MRR)",
"neural scoring model from BIBREF33 to measure recipe-level coherence"
],
"free_form_answer": "",
"highlighted_evidence": [
"While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes.",
"We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles—one `gold' user who consumed the original recipe, and nine randomly generated user profiles.",
"We measure user matching accuracy (UMA)—the proportion where the gold user is ranked highest—and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user.",
"We use the neural scoring model from BIBREF33 to measure recipe-level coherence for each generated recipe."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Metrics on generated recipes from test set. D-1/2 = Distinct-1/2, UMA = User Matching Accuracy, MRR = Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model)."
],
"extractive_spans": [],
"free_form_answer": " Distinct-1/2, UMA = User Matching Accuracy, MRR\n= Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model)",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Metrics on generated recipes from test set. D-1/2 = Distinct-1/2, UMA = User Matching Accuracy, MRR = Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"333eb3f140a618da478dbe407db4af9947ce4751",
"a6c5606084213eef853a7dbcc12a6f6f6e056f7e",
"e675948a12bb88c2aa16072193cc5f2891104bfb"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 1: Sample data flow through model architecture. Emphasis on prior recipe attention scores (darker is stronger). Ingredient attention omitted for clarity.",
"FLOAT SELECTED: Figure 2: A sample question for pairwise evaluation survey."
],
"extractive_spans": [],
"free_form_answer": "English",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: Sample data flow through model architecture. Emphasis on prior recipe attention scores (darker is stronger). Ingredient attention omitted for clarity.",
"FLOAT SELECTED: Figure 2: A sample question for pairwise evaluation survey."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Qualitative Analysis: We present sample outputs for a cocktail recipe in tab:samplerecipes, and additional recipes in the appendix. Generation quality progressively improves from generic baseline output to a blended cocktail produced by our best performing model. Models attending over prior recipes explicitly reference ingredients. The Prior Name model further suggests the addition of lemon and mint, which are reasonably associated with previously consumed recipes like coconut mousse and pork skewers."
],
"extractive_spans": [],
"free_form_answer": "English",
"highlighted_evidence": [
"Qualitative Analysis: We present sample outputs for a cocktail recipe in tab:samplerecipes, and additional recipes in the appendix."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4958d5fd1ee864034d5daf13c2d6fc07055693f1"
],
"answer": [
{
"evidence": [
"Recipe Level Coherence: A plausible recipe should possess a coherent step order, and we evaluate this via a metric for recipe-level coherence. We use the neural scoring model from BIBREF33 to measure recipe-level coherence for each generated recipe. Each recipe step is encoded by BERT BIBREF34. Our scoring model is a GRU network that learns the overall recipe step ordering structure by minimizing the cosine similarity of recipe step hidden representations presented in the correct and reverse orders. Once pretrained, our scorer calculates the similarity of a generated recipe to the forward and backwards ordering of its corresponding gold label, giving a score equal to the difference between the former and latter. A higher score indicates better step ordering (with a maximum score of 2). tab:coherencemetrics shows that our personalized models achieve average recipe-level coherence scores of 1.78-1.82, surpassing the baseline at 1.77.",
"Human Evaluation: We presented 310 pairs of recipes for pairwise comparison BIBREF8 (details in appendix) between baseline and each personalized model, with results shown in tab:metricsontest. On average, human evaluators preferred personalized model outputs to baseline 63% of the time, confirming that personalized attention improves the semantic plausibility of generated recipes. We also performed a small-scale human coherence survey over 90 recipes, in which 60% of users found recipes generated by personalized models to be more coherent and preferable to those generated by baseline models."
],
"extractive_spans": [
"average recipe-level coherence scores of 1.78-1.82",
"human evaluators preferred personalized model outputs to baseline 63% of the time"
],
"free_form_answer": "",
"highlighted_evidence": [
"A higher score indicates better step ordering (with a maximum score of 2). tab:coherencemetrics shows that our personalized models achieve average recipe-level coherence scores of 1.78-1.82, surpassing the baseline at 1.77.",
"On average, human evaluators preferred personalized model outputs to baseline 63% of the time, confirming that personalized attention improves the semantic plausibility of generated recipes."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b0270fa939e31d315bc96caa7ce8cb1e08feae7f"
],
"answer": [
{
"evidence": [
"In this work, we investigate how leveraging historical user preferences can improve generation quality over strong baselines in our setting. We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8."
],
"extractive_spans": [
"name-based Nearest-Neighbor model (NN)",
"Encoder-Decoder baseline with ingredient attention (Enc-Dec)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e05bda7bf1c14a1c22898e0af269b6603e1fc111"
],
"answer": [
{
"evidence": [
"We collect a novel dataset of 230K+ recipe texts and 1M+ user interactions (reviews) over 18 years (2000-2018) from Food.com. Here, we restrict to recipes with at least 3 steps, and at least 4 and no more than 20 ingredients. We discard users with fewer than 4 reviews, giving 180K+ recipes and 700K+ reviews, with splits as in tab:recipeixnstats."
],
"extractive_spans": [
"from Food.com"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collect a novel dataset of 230K+ recipe texts and 1M+ user interactions (reviews) over 18 years (2000-2018) from Food.com."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8c6b65060f2f8ce4b1096070787885aac45aefb5"
],
"answer": [
{
"evidence": [
"We collect a novel dataset of 230K+ recipe texts and 1M+ user interactions (reviews) over 18 years (2000-2018) from Food.com. Here, we restrict to recipes with at least 3 steps, and at least 4 and no more than 20 ingredients. We discard users with fewer than 4 reviews, giving 180K+ recipes and 700K+ reviews, with splits as in tab:recipeixnstats."
],
"extractive_spans": [
"from Food.com"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collect a novel dataset of 230K+ recipe texts and 1M+ user interactions (reviews) over 18 years (2000-2018) from Food.com."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What metrics are used for evaluation?",
"What natural language(s) are the recipes written in?",
"What were their results on the new dataset?",
"What are the baseline models?",
"How did they obtain the interactions?",
"Where do they get the recipes from?"
],
"question_id": [
"5b551ba47d582f2e6467b1b91a8d4d6a30c343ec",
"3cf1edfa6d53a236cf4258afd87c87c0a477e243",
"9bfebf8e5bc0bacf0af96a9a951eb7b96b359faa",
"34dc0838632d643f33c8dbfe7bd4b656586582a2",
"c77359fb9d3ef96965a9af0396b101f82a0a9de6",
"1bdc990c7e948724ab04e70867675a334fdd3051"
],
"question_writer": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"historical linguistics",
"historical linguistics",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Statistics of Food.com interactions",
"Figure 1: Sample data flow through model architecture. Emphasis on prior recipe attention scores (darker is stronger). Ingredient attention omitted for clarity.",
"Table 2: Metrics on generated recipes from test set. D-1/2 = Distinct-1/2, UMA = User Matching Accuracy, MRR = Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model).",
"Table 3: Sample generated recipe. Emphasis on personalization and explicit ingredient mentions via highlights.",
"Table 4: Coherence metrics on generated recipes from test set.",
"Table 5: Interaction statistics for Food.com dataset before and after data processing.",
"Table 6: Sample data from GeniusKitchen with recipe and user interaction details.",
"Figure 2: A sample question for pairwise evaluation survey.",
"Table 7: Sample generated recipe “Chicken Bell Pepper Chili Weight Watchers” for all models.",
"Table 8: Sample generated waffle recipe for all models.",
"Figure 3: A sample question for coherence evaluation survey."
],
"file": [
"3-Table1-1.png",
"3-Figure1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png",
"8-Table5-1.png",
"9-Table6-1.png",
"9-Figure2-1.png",
"10-Table7-1.png",
"11-Table8-1.png",
"12-Figure3-1.png"
]
} | [
"What metrics are used for evaluation?",
"What natural language(s) are the recipes written in?"
] | [
[
"1909.00105-Recipe Dataset: Food.com-1",
"1909.00105-Experiments and Results-4",
"1909.00105-4-Table2-1.png",
"1909.00105-Experiments and Results-1",
"1909.00105-Experiments and Results-5",
"1909.00105-Experiments and Results-2"
],
[
"1909.00105-9-Figure2-1.png",
"1909.00105-Experiments and Results-3",
"1909.00105-3-Figure1-1.png"
]
] | [
" Distinct-1/2, UMA = User Matching Accuracy, MRR\n= Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model)",
"English"
] | 197 |
2001.02885 | Resolving the Scope of Speculation and Negation using Transformer-Based Architectures | Speculation is a naturally occurring phenomenon in textual data, forming an integral component of many systems, especially in the biomedical information retrieval domain. Previous work addressing cue detection and scope resolution (the two subtasks of speculation detection) has ranged from rule-based systems to deep learning-based approaches. In this paper, we apply three popular transformer-based architectures, BERT, XLNet and RoBERTa, to this task, on two publicly available datasets, BioScope Corpus and SFU Review Corpus, reporting substantial improvements over previously reported results (by at least 0.29 F1 points on cue detection and 4.27 F1 points on scope resolution). We also experiment with joint training of the model on multiple datasets, which outperforms the single dataset training approach by a good margin. We observe that XLNet consistently outperforms BERT and RoBERTa, contrary to results on other benchmark datasets. To confirm this observation, we apply XLNet and RoBERTa to negation detection and scope resolution, reporting state-of-the-art results on negation scope resolution for the BioScope Corpus (increase of 3.16 F1 points on the BioScope Full Papers, 0.06 F1 points on the BioScope Abstracts) and the SFU Review Corpus (increase of 0.3 F1 points). | {
"paragraphs": [
[
"The task of speculation detection and scope resolution is critical in distinguishing factual information from speculative information. This has multiple use-cases, like systems that determine the veracity of information, and those that involve requirement analysis. This task is particularly important to the biomedical domain, where patient reports and medical articles often use this feature of natural language. This task is commonly broken down into two subtasks: the first subtask, speculation cue detection, is to identify the uncertainty cue in a sentence, while the second subtask: scope resolution, is to identify the scope of that cue. For instance, consider the example:",
"It might rain tomorrow.",
"The speculation cue in the sentence above is ‘might’ and the scope of the cue ‘might’ is ‘rain tomorrow’. Thus, the speculation cue is the word that expresses the speculation, while the words affected by the speculation are in the scope of that cue.",
"This task was the CoNLL-2010 Shared Task (BIBREF0), which had 3 different subtasks. Task 1B was speculation cue detection on the BioScope Corpus, Task 1W was weasel identification from Wikipedia articles, and Task 2 was speculation scope resolution from the BioScope Corpus. For each task, the participants were provided the train and test set, which is henceforth referred to as Task 1B CoNLL and Task 2 CoNLL throughout this paper.",
"For our experimentation, we use the sub corpora of the BioScope Corpus (BIBREF1), namely the BioScope Abstracts sub corpora, which is referred to as BA, and the BioScope Full Papers sub corpora, which is referred to as BF. We also use the SFU Review Corpus (BIBREF2), which is referred to as SFU.",
"This subtask of natural language processing, along with another similar subtask, negation detection and scope resolution, have been the subject of a body of work over the years. The approaches used to solve them have evolved from simple rule-based systems (BIBREF3) based on linguistic information extracted from the sentences, to modern deep-learning based methods. The Machine Learning techniques used varied from Maximum Entropy Classifiers (BIBREF4) to Support Vector Machines (BIBREF5,BIBREF6,BIBREF7,BIBREF8), while the deep learning approaches included Recursive Neural Networks (BIBREF9,BIBREF10), Convolutional Neural Networks (BIBREF11) and most recently transfer learning-based architectures like Bidirectional Encoder Representation from Transformers (BERT) (BIBREF12). Figures FIGREF1 and FIGREF1 contain a summary of the papers addressing speculation detection and scope resolution (BIBREF13, BIBREF5, BIBREF9, BIBREF3, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF6, BIBREF11, BIBREF18, BIBREF10, BIBREF19, BIBREF7, BIBREF4, BIBREF8).",
"Inspired by the most recent approach of applying BERT to negation detection and scope resolution (BIBREF12), we take this approach one step further by performing a comparative analysis of three popular transformer-based architectures: BERT (BIBREF20), XLNet (BIBREF21) and RoBERTa (BIBREF22), applied to speculation detection and scope resolution. We evaluate the performance of each model across all datasets via the single dataset training approach, and report all scores including inter-dataset scores (i.e. train on one dataset, evaluate on another) to test the generalizability of the models. This approach outperforms all existing systems on the task of speculation detection and scope resolution. Further, we jointly train on multiple datasets and obtain improvements over the single dataset training approach on most datasets.",
"Contrary to results observed on benchmark GLUE tasks, we observe XLNet consistently outperforming RoBERTa. To confirm this observation, we apply these models to the negation detection and scope resolution task, and observe a continuity in this trend, reporting state-of-the-art results on three of four datasets on the negation scope resolution task.",
"The rest of the paper is organized as follows: In Section 2, we provide a detailed description of our methodology and elaborate on the experimentation details. In Section 3, we present our results and analysis on the speculation detection and scope resolution task, using the single dataset and the multiple dataset training approach. In Section 4, we show the results of applying XLNet and RoBERTa on negation detection and scope resolution and propose a few reasons to explain why XLNet performs better than RoBERTa. Finally, the future scope and conclusion is mentioned in Section 5."
],
[
"We use the methodology by Khandelwal and Sawant (BIBREF12), and modify it to support experimentation with multiple models.",
"For Speculation Cue Detection:",
"Input Sentence: It might rain tomorrow.",
"True Labels: Not-A-Cue, Cue, Not-A-Cue, Not-A-Cue.",
"First, this sentence is preprocessed to get the target labels as per the following annotation schema:",
"1 – Normal Cue 2 – Multiword Cue 3 – Not a cue 4 – Pad token",
"Thus, the preprocessed sequence is as follows:",
"Input Sentence: [It, might, rain, tomorrow]",
"True Labels: [3,1,3,3]",
"Then, we preprocess the input using the tokenizer for the model being used (BERT, XLNet or RoBERTa): splitting each word into one or more tokens, and converting each token to it’s corresponding tokenID, and padding it to the maximum input length of the model. Thus,",
"Input Sentence: [wtt(It), wtt(might), wtt(rain), wtt(tom), wtt(## or), wtt(## row), wtt(〈pad 〉),wtt(〈pad 〉)...]",
"True Labels: [3,1,3,3,3,3,4,4,4,4,...]",
"The word ‘tomorrow' has been split into 3 tokens, ‘tom', ‘##or' and ‘##row'. The function to convert the word to tokenID is represented by wtt.",
"For Speculation Scope Resolution:",
"If a sentence has multiple cues, each cue's scope will be resolved individually.",
"Input Sentence: It might rain tomorrow.",
"True Labels: Out-Of-Scope, Out-Of-Scope, In-Scope, In-Scope.",
"First, this sentence is preprocessed to get the target labels as per the following annotation schema:",
"0 – Out-Of-Scope 1 – In-Scope",
"Thus, the preprocessed sequence is as follows:",
"True Scope Labels: [0,0,1,1]",
"As for cue detection, we preprocess the input using the tokenizer for the model being used. Additionally, we need to indicate which cue's scope we want to find in the input sentence. We do this by inserting a special token representing the token type (according to the cue detection annotation schema) before the cue word whose scope is being resolved. Here, we want to find the scope of the cue ‘might'. Thus,",
"Input Sentence: [wtt(It), wtt(〈token[1]〉), wtt(might), wtt(rain), wtt(tom), wtt(##or), wtt(##row), wtt(〈pad〉), wtt(〈pad〉)...]",
"True Scope Labels: [0,0,1,1,1,1,0,0,0,0,...]",
"Now, the preprocessed input for cue detection and similarly for scope detection is fed as input to our model as follows:",
"X = Model (Input)",
"Y = W*X + b",
"The W matrix is a matrix of size n_hidden x num_classes (n_hidden is the size of the representation of a token within the model). These logits are fed to the loss function.",
"We use the following variants of each model:",
"BERT: bert-base-uncaseds3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz (The model used by BIBREF12)",
"RoBERTa: roberta-bases3.amazonaws.com/models.huggingface.co/bert/roberta-base-pytorch_model.bin (RoBERTa-base does not have an uncased variant)",
"XLNet: xlnet-base-caseds3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-pytorch_model.bin (XLNet-base does not have an uncased variant)",
"The output of the model is a vector of probabilities per token. The loss is calculated for each token, by using the output vector and the true label for that token. We use class weights for the loss function, setting the weight for label 4 to 0 and all other labels to 1 (for cue detection only) to avoid training on padding token’s output.",
"We calculate the scores (Precision, Recall, F1) for the model per word of the input sentence, not per token that was fed to the model, as the tokens could be different for different models leading to inaccurate scores. For the above example, we calculate the output label for the word ‘tomorrow', not for each token it was split into (‘tom', ‘##or' and ‘##row'). To find the label for each word from the tokens it was split into, we experiment with 2 methods:",
"Average: We average the output vectors (softmax probabilities) for each token that the word was split into by the model's tokenizer. In the example above, we average the output of ‘tom', ‘##or' and ‘##row' to get the output for ‘tomorrow'. Then, we take an argmax over the resultant vector. This is then compared with the true label for the original word.",
"First Token: Here, we only consider the first token's probability vector (among all tokens the word was split into) as the output for that word, and get the label by an argmax over this vector. In the example above, we would consider the output vector corresponding to the token ‘tom' as the output for the word ‘tomorrow'.",
"For cue detection, the results are reported for the Average method only, while we report the scores for both Average and First Token for Scope Resolution.",
"For fair comparison, we use the same hyperparameters for the entire architecture for all 3 models. Only the tokenizer and the model are changed for each model. All other hyperparameters are kept same. We finetune the models for 60 epochs, using early stopping with a patience of 6 on the F1 score (word level) on the validation dataset. We use an initial learning rate of 3e-5, with a batch size of 8. We use the Categorical Cross Entropy loss function.",
"We use the Huggingface’s Pytorch Transformer library (BIBREF23) for the models and train all our models on Google Colaboratory."
],
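A small Python sketch of the two word-level aggregation strategies described above (`Average` and `First Token`). The toy sub-token counts and probability values are made up for illustration and are not tied to any specific pretrained model.

```python
# Hypothetical sketch of mapping sub-token outputs back to word-level labels.
# 'pieces_per_word[i]' is how many sub-tokens word i was split into, and
# 'probs' holds one softmax vector per sub-token (toy values, 2 scope classes).
def word_labels(probs, pieces_per_word, method="average"):
    labels, pos = [], 0
    for n in pieces_per_word:
        group = probs[pos:pos + n]
        if method == "average":
            # average the sub-token probability vectors, then argmax
            vec = [sum(col) / n for col in zip(*group)]
        else:  # "first_token": use only the first sub-token's vector
            vec = group[0]
        labels.append(max(range(len(vec)), key=vec.__getitem__))
        pos += n
    return labels

# "It might rain tomorrow" -> 'tomorrow' split into 3 pieces (tom, ##or, ##row)
probs = [[0.9, 0.1], [0.2, 0.8], [0.3, 0.7],
         [0.6, 0.4], [0.2, 0.8], [0.4, 0.6]]
print(word_labels(probs, [1, 1, 1, 3], "average"))      # [0, 1, 1, 1]
print(word_labels(probs, [1, 1, 1, 3], "first_token"))  # [0, 1, 1, 0]
```

The example shows why the two schemes can disagree: the averaged vector for the split word favors the in-scope class even though its first sub-token alone does not.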
[
"We use a default train-validation-test split of 70-15-15 for each dataset. For the speculation detection and scope resolution subtasks using single-dataset training, we report the results as an average of 5 runs of the model. For training the model on multiple datasets, we perform a 70-15-15 split of each training dataset, after which the train and validation part of the individual datasets are merged while the scores are reported for the test part of the individual datasets, which is not used for training or validation. We report the results as an average of 3 runs of the model. Figure FIGREF8 contains results for speculation cue detection and scope resolution when trained on a single dataset. All models perform the best when trained on the same dataset as they are evaluated on, except for BF, which gets the best results when trained on BA. This is because of the transfer learning capabilities of the models and the fact that BF is a smaller dataset than BA (BF: 2670 sentences, BA: 11871 sentences). For speculation cue detection, there is lesser generalizability for models trained on BF or BA, while there is more generalizability for models trained on SFU. This could be because of the different nature of the biomedical domain.",
"Figure FIGREF11 contains the results for speculation detection and scope resolution for models trained jointly on multiple datasets. We observe that training on multiple datasets helps the performance of all models on each dataset, as the quantity of data available to train the model increases. We also observe that XLNet consistently outperforms BERT and RoBERTa. To confirm this observation, we apply the 2 models to the related task of negation detection and scope resolution"
],
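For clarity, here is a minimal sketch of the joint-training data preparation described above: each corpus is split 70-15-15, the train and validation portions are merged across corpora, and each test portion is kept separate so scores can be reported per corpus. Function and variable names are illustrative only.

```python
# Hypothetical sketch of the multi-dataset training setup: merge per-corpus
# train/validation splits, evaluate separately on each corpus' test split.
import random

def split_70_15_15(sentences, seed=0):
    data = list(sentences)
    random.Random(seed).shuffle(data)
    n = len(data)
    a, b = int(0.70 * n), int(0.85 * n)
    return data[:a], data[a:b], data[b:]

corpora = {"BF": [f"bf_{i}" for i in range(20)],
           "BA": [f"ba_{i}" for i in range(40)],
           "SFU": [f"sfu_{i}" for i in range(30)]}

joint_train, joint_valid, test_sets = [], [], {}
for name, sents in corpora.items():
    tr, va, te = split_70_15_15(sents)
    joint_train += tr
    joint_valid += va
    test_sets[name] = te            # scores are reported per corpus

print(len(joint_train), len(joint_valid), {k: len(v) for k, v in test_sets.items()})
```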
[
"We use a default train-validation-test split of 70-15-15 for each dataset, and use all 4 datasets (BF, BA, SFU and Sherlock). The results for BERT are taken from BIBREF12. The results for XLNet and RoBERTa are averaged across 5 runs for statistical significance. Figure FIGREF14 contains results for negation cue detection and scope resolution. We report state-of-the-art results on negation scope resolution on BF, BA and SFU datasets. Contrary to popular opinion, we observe that XLNet is better than RoBERTa for the cue detection and scope resolution tasks. A few possible reasons for this trend are:",
"Domain specificity, as both negation and speculation are closely related subtasks. Further experimentation on different tasks is needed to verify this.",
"Most benchmark tasks are sentence classification tasks, whereas the subtasks we experiment on are sequence labelling tasks. Given the pre-training objective of XLNet (training on permutations of the input), it may be able to capture long-term dependencies better, essential for sequence labelling tasks.",
"We work with the base variants of the models, while most results are reported with the large variants of the models."
],
[
"In this paper, we expanded on the work of Khandelwal and Sawant (BIBREF12) by looking at alternative transfer-learning models and experimented with training on multiple datasets. On the speculation detection task, we obtained a gain of 0.42 F1 points on BF, 1.98 F1 points on BA and 0.29 F1 points on SFU, while on the scope resolution task, we obtained a gain of 8.06 F1 points on BF, 4.27 F1 points on BA and 11.87 F1 points on SFU, when trained on a single dataset. While training on multiple datasets, we observed a gain of 10.6 F1 points on BF and 1.94 F1 points on BA on the speculation detection task and 2.16 F1 points on BF and 0.25 F1 points on SFU on the scope resolution task over the single dataset training approach. We thus significantly advance the state-of-the-art for speculation detection and scope resolution. On the negation scope resolution task, we applied the XLNet and RoBERTa and obtained a gain of 3.16 F1 points on BF, 0.06 F1 points on BA and 0.3 F1 points on SFU. Thus, we demonstrated the usefulness of transformer-based architectures in the field of negation and speculation detection and scope resolution. We believe that a larger and more general dataset would go a long way in bolstering future research and would help create better systems that are not domain-specific."
]
],
"section_name": [
"Introduction",
"Methodology and Experimental Setup",
"Results: Speculation Cue Detection and Scope Resolution",
"Negation Cue Detection and Scope Resolution",
"Conclusion and Future Scope"
]
} | {
"answers": [
{
"annotation_id": [
"b4fecbe78fead4a66c6dbbde5db0702bf850f7ab"
],
"answer": [
{
"evidence": [
"This subtask of natural language processing, along with another similar subtask, negation detection and scope resolution, have been the subject of a body of work over the years. The approaches used to solve them have evolved from simple rule-based systems (BIBREF3) based on linguistic information extracted from the sentences, to modern deep-learning based methods. The Machine Learning techniques used varied from Maximum Entropy Classifiers (BIBREF4) to Support Vector Machines (BIBREF5,BIBREF6,BIBREF7,BIBREF8), while the deep learning approaches included Recursive Neural Networks (BIBREF9,BIBREF10), Convolutional Neural Networks (BIBREF11) and most recently transfer learning-based architectures like Bidirectional Encoder Representation from Transformers (BERT) (BIBREF12). Figures FIGREF1 and FIGREF1 contain a summary of the papers addressing speculation detection and scope resolution (BIBREF13, BIBREF5, BIBREF9, BIBREF3, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF6, BIBREF11, BIBREF18, BIBREF10, BIBREF19, BIBREF7, BIBREF4, BIBREF8)."
],
"extractive_spans": [
"varied from Maximum Entropy Classifiers (BIBREF4) to Support Vector Machines (BIBREF5,BIBREF6,BIBREF7,BIBREF8)",
"Recursive Neural Networks (BIBREF9,BIBREF10), Convolutional Neural Networks (BIBREF11) and most recently transfer learning-based architectures like Bidirectional Encoder Representation from Transformers (BERT) (BIBREF12)"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Machine Learning techniques used varied from Maximum Entropy Classifiers (BIBREF4) to Support Vector Machines (BIBREF5,BIBREF6,BIBREF7,BIBREF8), while the deep learning approaches included Recursive Neural Networks (BIBREF9,BIBREF10), Convolutional Neural Networks (BIBREF11) and most recently transfer learning-based architectures like Bidirectional Encoder Representation from Transformers (BERT) (BIBREF12). Figures FIGREF1 and FIGREF1 contain a summary of the papers addressing speculation detection and scope resolution (BIBREF13, BIBREF5, BIBREF9, BIBREF3, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF6, BIBREF11, BIBREF18, BIBREF10, BIBREF19, BIBREF7, BIBREF4, BIBREF8)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"35d9aafdc55acb13b07f3f01cec467db4ee4869e",
"d01104c1bdec03988e12750b362989c6afdb6f64"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Fig. 5: Results for Joint Training on Multiple Datasets"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Fig. 5: Results for Joint Training on Multiple Datasets"
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"f11abeadf264d790a6db62da6022c18d490ca40f",
"f387643d0cf4c1f6056ccff7bdce6d09cdcfbd34"
],
"answer": [
{
"evidence": [
"We use a default train-validation-test split of 70-15-15 for each dataset, and use all 4 datasets (BF, BA, SFU and Sherlock). The results for BERT are taken from BIBREF12. The results for XLNet and RoBERTa are averaged across 5 runs for statistical significance. Figure FIGREF14 contains results for negation cue detection and scope resolution. We report state-of-the-art results on negation scope resolution on BF, BA and SFU datasets. Contrary to popular opinion, we observe that XLNet is better than RoBERTa for the cue detection and scope resolution tasks. A few possible reasons for this trend are:"
],
"extractive_spans": [
"BF, BA, SFU and Sherlock"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use a default train-validation-test split of 70-15-15 for each dataset, and use all 4 datasets (BF, BA, SFU and Sherlock)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Fig. 5: Results for Joint Training on Multiple Datasets"
],
"extractive_spans": [],
"free_form_answer": "BioScope Abstracts, SFU, and BioScope Full Papers",
"highlighted_evidence": [
"FLOAT SELECTED: Fig. 5: Results for Joint Training on Multiple Datasets"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"3c81db6d27fdd2f99340a97fe13a1cf3be065de0"
],
"answer": [
{
"evidence": [
"This subtask of natural language processing, along with another similar subtask, negation detection and scope resolution, have been the subject of a body of work over the years. The approaches used to solve them have evolved from simple rule-based systems (BIBREF3) based on linguistic information extracted from the sentences, to modern deep-learning based methods. The Machine Learning techniques used varied from Maximum Entropy Classifiers (BIBREF4) to Support Vector Machines (BIBREF5,BIBREF6,BIBREF7,BIBREF8), while the deep learning approaches included Recursive Neural Networks (BIBREF9,BIBREF10), Convolutional Neural Networks (BIBREF11) and most recently transfer learning-based architectures like Bidirectional Encoder Representation from Transformers (BERT) (BIBREF12). Figures FIGREF1 and FIGREF1 contain a summary of the papers addressing speculation detection and scope resolution (BIBREF13, BIBREF5, BIBREF9, BIBREF3, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF6, BIBREF11, BIBREF18, BIBREF10, BIBREF19, BIBREF7, BIBREF4, BIBREF8).",
"FLOAT SELECTED: Fig. 1: Literature Review: Speculation Cue Detection"
],
"extractive_spans": [
"Figures FIGREF1 and FIGREF1 contain a summary of the papers addressing speculation detection and scope resolution"
],
"free_form_answer": "",
"highlighted_evidence": [
"Figures FIGREF1 and FIGREF1 contain a summary of the papers addressing speculation detection and scope resolution (BIBREF13, BIBREF5, BIBREF9, BIBREF3, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF6, BIBREF11, BIBREF18, BIBREF10, BIBREF19, BIBREF7, BIBREF4, BIBREF8).",
"FLOAT SELECTED: Fig. 1: Literature Review: Speculation Cue Detection"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"98b75bd4b067e9b2a9bf916a65a50931214239c5",
"a622e1b328d340926bcc13d5a4f19c1fe50bdd00"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"5c333ecdb187475cc29e6fdaed553f8708452b8a",
"765d60937ab33b35b6c3012f047041582de7e9f4"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"What were the baselines?",
"Does RoBERTa outperform BERT?",
"Which multiple datasets did they train on during joint training?",
"What were the previously reported results?",
"What is the size of SFU Review corpus?",
"What is the size of bioScope corpus?"
],
"question_id": [
"78536da059b884d6ad04680baeb894895458055c",
"96b07373756d7854bccc3c12e8d41454ab8741f5",
"511517efc96edcd3e91e7783821c9d6d5a6562af",
"9122de265577e8f6b5160cd7d28be9e22da752b2",
"e86130c5b9ab28f0ec539c2bed1b1ae9efb99b7d",
"45be665a4504f0c7f458cf3f75a95d5a75eefd42"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Fig. 1: Literature Review: Speculation Cue Detection",
"Fig. 3: Our Approach",
"Fig. 4: Results for Single Dataset Training",
"Fig. 5: Results for Joint Training on Multiple Datasets",
"Fig. 6: Results for Negation Detection and Scope Resolution"
],
"file": [
"2-Figure1-1.png",
"4-Figure3-1.png",
"6-Figure4-1.png",
"6-Figure5-1.png",
"7-Figure6-1.png"
]
} | [
"Which multiple datasets did they train on during joint training?"
] | [
[
"2001.02885-Negation Cue Detection and Scope Resolution-0",
"2001.02885-6-Figure5-1.png"
]
] | [
"BioScope Abstracts, SFU, and BioScope Full Papers"
] | 198 |
1809.06537 | Automatic Judgment Prediction via Legal Reading Comprehension | Automatic judgment prediction aims to predict the judicial results based on case materials. It has been studied for several decades mainly by lawyers and judges, considered as a novel and prospective application of artificial intelligence techniques in the legal field. Most existing methods follow the text classification framework, which fails to model the complex interactions among complementary case materials. To address this issue, we formalize the task as Legal Reading Comprehension according to the legal scenario. Following the working protocol of human judges, LRC predicts the final judgment results based on three types of information, including fact description, plaintiffs' pleas, and law articles. Moreover, we propose a novel LRC model, AutoJudge, which captures the complex semantic interactions among facts, pleas, and laws. In experiments, we construct a real-world civil case dataset for LRC. Experimental results on this dataset demonstrate that our model achieves significant improvement over state-of-the-art models. We will publish all source code and datasets of this work on github.com for further research. | {
"paragraphs": [
[
"Automatic judgment prediction is to train a machine judge to determine whether a certain plea in a given civil case would be supported or rejected. In countries with civil law system, e.g. mainland China, such process should be done with reference to related law articles and the fact description, as is performed by a human judge. The intuition comes from the fact that under civil law system, law articles act as principles for juridical judgments. Such techniques would have a wide range of promising applications. On the one hand, legal consulting systems could provide better access to high-quality legal resources in a low-cost way to legal outsiders, who suffer from the complicated terminologies. On the other hand, machine judge assistants for professionals would help improve the efficiency of the judicial system. Besides, automated judgment system can help in improving juridical equality and transparency. From another perspective, there are currently 7 times much more civil cases than criminal cases in mainland China, with annual rates of increase of INLINEFORM0 and INLINEFORM1 respectively, making judgment prediction in civil cases a promising application BIBREF0 .",
"Previous works BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 formalize judgment prediction as the text classification task, regarding either charge names or binary judgments, i.e., support or reject, as the target classes. These works focus on the situation where only one result is expected, e.g., the US Supreme Court's decisions BIBREF2 , and the charge name prediction for criminal cases BIBREF3 . Despite these recent efforts and their progress, automatic judgment prediction in civil law system is still confronted with two main challenges:",
"One-to-Many Relation between Case and Plea. Every single civil case may contain multiple pleas and the result of each plea is co-determined by related law articles and specific aspects of the involved case. For example, in divorce proceedings, judgment of alienation of mutual affection is the key factor for granting divorce but custody of children depends on which side can provide better an environment for children's growth as well as parents' financial condition. Here, different pleas are independent.",
"Heterogeneity of Input Triple. Inputs to a judgment prediction system consist of three heterogeneous yet complementary parts, i.e., fact description, plaintiff's plea, and related law articles. Concatenating them together and treating them simply as a sequence of words as in previous works BIBREF2 , BIBREF1 would cause a great loss of information. This is the same in question-answering where the dual inputs, i.e., query and passage, should be modeled separately.",
"Despite the introduction of the neural networks that can learn better semantic representations of input text, it remains unsolved to incorporate proper mechanisms to integrate the complementary triple of pleas, fact descriptions, and law articles together.",
"Inspired by recent advances in question answering (QA) based reading comprehension (RC) BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , we propose the Legal Reading Comprehension (LRC) framework for automatic judgment prediction. LRC incorporates the reading mechanism for better modeling of the complementary inputs above-mentioned, as is done by human judges when referring to legal materials in search of supporting law articles. Reading mechanism, by simulating how human connects and integrates multiple text, has proven an effective module in RC tasks. We argue that applying the reading mechanism in a proper way among the triplets can obtain a better understanding and more informative representation of the original text, and further improve performance . To instantiate the framework, we propose an end-to-end neural network model named AutoJudge.",
"For experiments, we train and evaluate our models in the civil law system of mainland China. We collect and construct a large-scale real-world data set of INLINEFORM0 case documents that the Supreme People's Court of People's Republic of China has made publicly available. Fact description, pleas, and results can be extracted easily from these case documents with regular expressions, since the original documents have special typographical characteristics indicating the discourse structure. We also take into account law articles and their corresponding juridical interpretations. We also implement and evaluate previous methods on our dataset, which prove to be strong baselines.",
"Our experiment results show significant improvements over previous methods. Further experiments demonstrate that our model also achieves considerable improvement over other off-the-shelf state-of-the-art models under classification and question answering framework respectively. Ablation tests carried out by taking off some components of our model further prove its robustness and effectiveness.",
"To sum up, our contributions are as follows:",
"(1) We introduce reading mechanism and re-formalize judgment prediction as Legal Reading Comprehension to better model the complementary inputs.",
"(2) We construct a real-world dataset for experiments, and plan to publish it for further research.",
"(3) Besides baselines from previous works, we also carry out comprehensive experiments comparing different existing deep neural network methods on our dataset. Supported by these experiments, improvements achieved by LRC prove to be robust."
],
[
"Automatic judgment prediction has been studied for decades. At the very first stage of judgment prediction studies, researchers focus on mathematical and statistical analysis of existing cases, without any conclusions or methodologies on how to predict them BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Recent attempts consider judgment prediction under the text classification framework. Most of these works extract efficient features from text (e.g., N-grams) BIBREF15 , BIBREF4 , BIBREF1 , BIBREF16 , BIBREF17 or case profiles (e.g., dates, terms, locations and types) BIBREF2 . All these methods require a large amount of human effort to design features or annotate cases. Besides, they also suffer from generalization issue when applied to other scenarios.",
"Motivated by the successful application of deep neural networks, Luo et al. BIBREF3 introduce an attention-based neural model to predict charges of criminal cases, and verify the effectiveness of taking law articles into consideration. Nevertheless, they still fall into the text classification framework and lack the ability to handle multiple inputs with more complicated structures."
],
[
"As the basis of previous judgment prediction works, typical text classification task takes a single text content as input and predicts the category it belongs to. Recent works usually employ neural networks to model the internal structure of a single input BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 .",
"There also exists another thread of text classification called entailment prediction. Methods proposed in BIBREF22 , BIBREF23 are intended for complementary inputs, but the mechanisms can be considered as a simplified version of reading comprehension."
],
[
"Reading comprehension is a relevant task to model heterogeneous and complementary inputs, where an answer is predicted given two channels of inputs, i.e. a textual passage and a query. Considerable progress has been made BIBREF6 , BIBREF24 , BIBREF5 . These models employ various attention mechanism to model the interaction between passage and query. Inspired by the advantage of reading comprehension models on modeling multiple inputs, we apply this idea into the legal area and propose legal reading comprehension for judgment prediction."
],
[
"Conventional reading comprehension BIBREF25 , BIBREF26 , BIBREF7 , BIBREF8 usually considers reading comprehension as predicting the answer given a passage and a query, where the answer could be a single word, a text span of the original passage, chosen from answer candidates, or generated by human annotators.",
"Generally, an instance in RC is represented as a triple INLINEFORM0 , where INLINEFORM1 , INLINEFORM2 and INLINEFORM3 correspond to INLINEFORM4 , INLINEFORM5 and INLINEFORM6 respectively. Given a triple INLINEFORM7 , RC takes the pair INLINEFORM8 as the input and employs attention-based neural models to construct an efficient representation. Afterwards, the representation is fed into the output layer to select or generate an INLINEFORM9 ."
],
[
"Existing works usually formalize judgment prediction as a text classification task and focus on extracting well-designed features of specific cases. Such simplification ignores that the judgment of a case is determined by its fact description and multiple pleas. Moreover, the final judgment should act up to the legal provisions, especially in civil law systems. Therefore, how to integrate the information (i.e., fact descriptions, pleas, and law articles) in a reasonable way is critical for judgment prediction.",
"Inspired by the successful application of RC, we propose a framework of Legal Reading Comprehension(LRC) for judgment prediction in the legal area. As illustrated in Fig. FIGREF1 , for each plea in a given case, the prediction of judgment result is made based the fact description and the potentially relevant law articles.",
"In a nutshell, LRC can be formalized as the following quadruplet task: DISPLAYFORM0 ",
"where INLINEFORM0 is the fact description, INLINEFORM1 is the plea, INLINEFORM2 is the law articles and INLINEFORM3 is the result. Given INLINEFORM4 , LRC aims to predict the judgment result as DISPLAYFORM0 ",
"The probability is calculated with respect to the interaction among the triple INLINEFORM0 , which will draw on the experience of the interaction between INLINEFORM1 pairs in RC.",
"To summarize, LRC is innovative in the following aspects:",
"(1) While previous works fit the problem into text classification framework, LRC re-formalizes the way to approach such problems. This new framework provides the ability to deal with the heterogeneity of the complementary inputs.",
"(2) Rather than employing conventional RC models to handle pair-wise text information in the legal area, LRC takes the critical law articles into consideration and models the facts, pleas, and law articles jointly for judgment prediction, which is more suitable to simulate the human mode of dealing with cases."
],
[
"We propose a novel judgment prediction model AutoJudge to instantiate the LRC framework. As shown in Fig. FIGREF6 , AutoJudge consists of three flexible modules, including a text encoder, a pair-wise attentive reader, and an output module.",
"In the following parts, we give a detailed introduction to these three modules."
],
[
"As illustrated in Fig. FIGREF6 , Text Encoder aims to encode the word sequences of inputs into continuous representation sequences.",
"Formally, consider a fact description INLINEFORM0 , a plea INLINEFORM1 , and the relevant law articles INLINEFORM2 , where INLINEFORM3 denotes the INLINEFORM4 -th word in the sequence and INLINEFORM5 are the lengths of word sequences INLINEFORM6 respectively. First, we convert the words to their respective word embeddings to obtain INLINEFORM7 , INLINEFORM8 and INLINEFORM9 , where INLINEFORM10 . Afterwards, we employ bi-directional GRU BIBREF27 , BIBREF28 , BIBREF29 to produce the encoded representation INLINEFORM11 of all words as follows: DISPLAYFORM0 ",
"Note that, we adopt different bi-directional GRUs to encode fact descriptions, pleas, and law articles respectively(denoted as INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 ). With these text encoders, INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 are converting into INLINEFORM6 , INLINEFORM7 , and INLINEFORM8 ."
],
[
"How to model the interactions among the input text is the most important problem in reading comprehension. In AutoJudge, we employ a pair-wise attentive reader to process INLINEFORM0 and INLINEFORM1 respectively. More specifically, we propose to use pair-wise mutual attention mechanism to capture the complex semantic interaction between text pairs, as well as increasing the interpretability of AutoJudge.",
"For each input pair INLINEFORM0 or INLINEFORM1 , we employ pair-wise mutual attention to select relevant information from fact descriptions INLINEFORM2 and produce more informative representation sequences.",
"As a variant of the original attention mechanism BIBREF28 , we design the pair-wise mutual attention unit as a GRU with internal memories denoted as mGRU.",
"Taking the representation sequence pair INLINEFORM0 for instance, mGRU stores the fact sequence INLINEFORM1 into its memories. For each timestamp INLINEFORM2 , it selects relevant fact information INLINEFORM3 from the memories as follows, DISPLAYFORM0 ",
"Here, the weight INLINEFORM0 is the softmax value as DISPLAYFORM0 ",
"Note that, INLINEFORM0 represents the relevance between INLINEFORM1 and INLINEFORM2 . It is calculated as follows, DISPLAYFORM0 ",
"Here, INLINEFORM0 is the last hidden state in the GRU, which will be introduced in the following part. INLINEFORM1 is a weight vector, and INLINEFORM2 , INLINEFORM3 , INLINEFORM4 are attention metrics of our proposed pair-wise attention mechanism.",
"With the relevant fact information INLINEFORM0 and INLINEFORM1 , we get the INLINEFORM2 -th input of mGRU as DISPLAYFORM0 ",
"where INLINEFORM0 indicates the concatenation operation.",
"Then, we feed INLINEFORM0 into GRU to get more informative representation sequence INLINEFORM1 as follows, DISPLAYFORM0 ",
"For the input pair INLINEFORM0 , we can get INLINEFORM1 in the same way. Therefore, we omit the implementation details Here.",
"Similar structures with attention mechanism are also applied in BIBREF5 , BIBREF30 , BIBREF31 , BIBREF28 to obtain mutually aware representations in reading comprehension models, which significantly improve the performance of this task."
],
[
"Using text encoder and pair-wise attentive reader, the initial input triple INLINEFORM0 has been converted into two sequences, i.e., INLINEFORM1 and INLINEFORM2 , where INLINEFORM3 is defined similarly to INLINEFORM4 . These sequences reserve complex semantic information about the pleas and law articles, and filter out irrelevant information in fact descriptions.",
"With these two sequences, we concatenate INLINEFORM0 and INLINEFORM1 along the sequence length dimension to generate the sequence INLINEFORM2 . Since we have employed several GRU layers to encode the sequential inputs, another recurrent layer may be redundant. Therefore, we utilize a 1-layer CNN BIBREF18 to capture the local structure and generate the representation vector for the final prediction.",
"Assuming INLINEFORM0 is the predicted probability that the plea in the case sample would be supported and INLINEFORM1 is the gold standard, AutoJudge aims to minimize the cross-entropy as follows, DISPLAYFORM0 ",
"where INLINEFORM0 is the number of training data. As all the calculation in our model is differentiable, we employ Adam BIBREF32 for optimization."
],
[
"To evaluate the proposed LRC framework and the AutoJudge model, we carry out a series of experiments on the divorce proceedings, a typical yet complex field of civil cases. Divorce proceedings often come with several kinds of pleas, e.g. seeking divorce, custody of children, compensation, and maintenance, which focuses on different aspects and thus makes it a challenge for judgment prediction."
],
[
"Since none of the datasets from previous works have been published, we decide to build a new one. We randomly collect INLINEFORM0 cases from China Judgments Online, among which INLINEFORM1 cases are for training, INLINEFORM2 each for validation and testing. Among the original cases, INLINEFORM3 are granted divorce and others not. There are INLINEFORM4 valid pleas in total, with INLINEFORM5 supported and INLINEFORM6 rejected. Note that, if the divorce plea in a case is not granted, the other pleas of this case will not be considered by the judge. Case materials are all natural language sentences, with averagely INLINEFORM7 tokens per fact description and INLINEFORM8 per plea. There are 62 relevant law articles in total, each with INLINEFORM9 tokens averagely. Note that the case documents include special typographical signals, making it easy to extract labeled data with regular expression.",
"We apply some rules with legal prior to preprocess the dataset according to previous works BIBREF33 , BIBREF34 , BIBREF35 , which have proved effective in our experiments.",
"Name Replacement: All names in case documents are replaced with marks indicating their roles, instead of simply anonymizing them, e.g. <Plantiff>, <Defendant>, <Daughter_x> and so on. Since “all are equal before the law”, names should make no more difference than what role they take.",
"Law Article Filtration : Since most accessible divorce proceeding documents do not contain ground-truth fine-grained articles, we use an unsupervised method instead. First, we extract all the articles from the law text with regular expression. Afterwards, we select the most relevant 10 articles according to the fact descriptions as follows. We obtain sentence representation with CBOW BIBREF36 , BIBREF37 weighted by inverse document frequency, and calculate cosine distance between cases and law articles. Word embeddings are pre-trained with Chinese Wikipedia pages. As the final step, we extract top 5 relevant articles for each sample respectively from the main marriage law articles and their interpretations, which are equally important. We manually check the extracted articles for 100 cases to ensure that the extraction quality is fairly good and acceptable.",
"The filtration process is automatic and fully unsupervised since the original documents have no ground-truth labels for fine-grained law articles, and coarse-grained law-articles only provide limited information. We also experiment with the ground-truth articles, but only a small fraction of them has fine-grained ones, and they are usually not available in real-world scenarios."
],
[
"We employ Jieba for Chinese word segmentation and keep the top INLINEFORM0 frequent words. The word embedding size is set to 128 and the other low-frequency words are replaced with the mark <UNK>. The hidden size of GRU is set to 128 for each direction in Bi-GRU. In the pair-wise attentive reader, the hidden state is set to 256 for mGRu. In the CNN layer, filter windows are set to 1, 3, 4, and 5 with each filter containing 200 feature maps. We add a dropout layer BIBREF38 after the CNN layer with a dropout rate of INLINEFORM1 . We use Adam BIBREF32 for training and set learning rate to INLINEFORM2 , INLINEFORM3 to INLINEFORM4 , INLINEFORM5 to INLINEFORM6 , INLINEFORM7 to INLINEFORM8 , batch size to 64. We employ precision, recall, F1 and accuracy for evaluation metrics. We repeat all the experiments for 10 times, and report the average results."
],
[
"For comparison, we adopt and re-implement three kinds of baselines as follows:",
"We implement an SVM with lexical features in accordance with previous works BIBREF16 , BIBREF17 , BIBREF1 , BIBREF15 , BIBREF4 and select the best feature set on the development set.",
"We implement and fine-tune a series of neural text classifiers, including attention-based method BIBREF3 and other methods we deem important. CNN BIBREF18 and GRU BIBREF27 , BIBREF21 take as input the concatenation of fact description and plea. Similarly, CNN/GRU+law refers to using the concatenation of fact description, plea and law articles as inputs.",
"We implement and train some off-the-shelf RC models, including r-net BIBREF5 and AoA BIBREF6 , which are the leading models on SQuAD leaderboard. In our initial experiments, these models take fact description as passage and plea as query. Further, Law articles are added to the fact description as a part of the reading materials, which is a simple way to consider them as well."
],
[
"From Table TABREF37 , we have the following observations:",
"(1) AutoJudge consistently and significantly outperforms all the baselines, including RC models and other neural text classification models, which shows the effectiveness and robustness of our model.",
"(2) RC models achieve better performance than most text classification models (excluding GRU+Attention), which indicates that reading mechanism is a better way to integrate information from heterogeneous yet complementary inputs. On the contrary, simply adding law articles as a part of the reading materials makes no difference in performance. Note that, GRU+Attention employ similar attention mechanism as RC does and takes additional law articles into consideration, thus achieves comparable performance with RC models.",
"(3) Comparing with conventional RC models, AutoJudge achieves significant improvement with the consideration of additional law articles. It reflects the difference between LRC and conventional RC models. We re-formalize LRC in legal area to incorporate law articles via the reading mechanism, which can enhance judgment prediction. Moreover, CNN/GRU+law decrease the performance by simply concatenating original text with law articles, while GRU+Attention/AutoJudge increase the performance by integrating law articles with attention mechanism. It shows the importance and rationality of using attention mechanism to capture the interaction between multiple inputs.",
"The experiments support our hypothesis as proposed in the Introduction part that in civil cases, it's important to model the interactions among case materials. Reading mechanism can well perform the matching among them."
],
[
"AutoJudge is characterized by the incorporation of pair-wise attentive reader, law articles, and a CNN output layer, as well as some pre-processing with legal prior. We design ablation tests respectively to evaluate the effectiveness of these modules. When taken off the attention mechanism, AutoJudge degrades into a GRU on which a CNN is stacked. When taken off law articles, the CNN output layer only takes INLINEFORM0 as input. Besides, our model is tested respectively without name-replacement or unsupervised selection of law articles (i.e. passing the whole law text). As mentioned above, we system use law articles extracted with unsupervised method, so we also experiment with ground-truth law articles.",
"Results are shown in Table TABREF38 . We can infer that:",
"(1) The performance drops significantly after removing the attention layer or excluding the law articles, which is consistent with the comparison between AutoJudge and baselines. The result verifies that both the reading mechanism and incorporation of law articles are important and effective.",
"(2) After replacing CNN with an LSTM layer, performance drops as much as INLINEFORM0 in accuracy and INLINEFORM1 in F1 score. The reason may be the redundancy of RNNs. AutoJudge has employed several GRU layers to encode text sequences. Another RNN layer may be useless to capture sequential dependencies, while CNN can catch the local structure in convolution windows.",
"(3) Motivated by existing rule-based works, we conduct data pre-processing on cases, including name replacement and law article filtration. If we remove the pre-processing operations, the performance drops considerably. It demonstrates that applying the prior knowledge in legal filed would benefit the understanding of legal cases.",
"It's intuitive that the quality of the retrieved law articles would affect the final performance. As is shown in Table TABREF38 , feeding the whole law text without filtration results in worse performance. However, when we train and evaluate our model with ground truth articles, the performance is boosted by nearly INLINEFORM0 in both F1 and Acc. The performance improvement is quite limited compared to that in previous work BIBREF3 for the following reasons: (1) As mentioned above, most case documents only contain coarse-grained articles, and only a small number of them contain fine-grained ones, which has limited information in themselves. (2) Unlike in criminal cases where the application of an article indicates the corresponding crime, law articles in civil cases work as reference, and can be applied in both the cases of supports and rejects. As law articles cut both ways for the judgment result, this is one of the characteristics that distinguishes civil cases from criminal ones. We also need to remember that, the performance of INLINEFORM1 in accuracy or INLINEFORM2 in F1 score is unattainable in real-world setting for automatic prediction where ground-truth articles are not available.",
"In the area of civil cases, the understanding of the case materials and how they interact is a critical factor. The inclusion of law articles is not enough. As is shown in Table TABREF38 , compared to feeding the model with an un-selected set of law articles, taking away the reading mechanism results in greater performance drop. Therefore, the ability to read, understand and select relevant information from the complex multi-sourced case materials is necessary. It's even more important in real world since we don't have access to ground-truth law articles to make predictions."
],
[
"We visualize the heat maps of attention results. As shown in Fig. FIGREF47 , deeper background color represents larger attention score. The attention score is calculated with Eq. ( EQREF15 ). We take the average of the resulting INLINEFORM0 attention matrix over the time dimension to obtain attention values for each word.",
"The visualization demonstrates that the attention mechanism can capture relevant patterns and semantics in accordance with different pleas in different cases.",
"As for the failed samples, the most common reason comes from the anonymity issue, which is also shown in Fig. FIGREF47 . As mentioned above, we conduct name replacement. However, some critical elements are also anonymized by the government, due to the privacy issue. These elements are sometimes important to judgment prediction. For example, determination of the key factor long-time separation is relevant to the explicit dates, which are anonymized."
],
[
"In this paper, we explore the task of predicting judgments of civil cases. Comparing with conventional text classification framework, we propose Legal Reading Comprehension framework to handle multiple and complex textual inputs. Moreover, we present a novel neural model, AutoJudge, to incorporate law articles for judgment prediction. In experiments, we compare our model on divorce proceedings with various state-of-the-art baselines of various frameworks. Experimental results show that our model achieves considerable improvement than all the baselines. Besides, visualization results also demonstrate the effectiveness and interpretability of our proposed model.",
"In the future, we can explore the following directions: (1) Limited by the datasets, we can only verify our proposed model on divorce proceedings. A more general and larger dataset will benefit the research on judgment prediction. (2) Judicial decisions in some civil cases are not always binary, but more diverse and flexible ones, e.g. compensation amount. Thus, it is critical for judgment prediction to manage various judgment forms."
]
],
"section_name": [
"Introduction",
"Judgment Prediction",
"Text Classification",
"Reading Comprehension",
"Conventional Reading Comprehension",
"Legal Reading Comprehension",
"Methods",
"Text Encoder",
"Pair-Wise Attentive Reader",
"Output Layer",
"Experiments",
"Dataset Construction for Evaluation",
"Implementation Details",
"Baselines",
"Results and Analysis",
"Ablation Test",
"Case Study",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"92c1100baf27171886b19b369bc9c9204562bc03"
],
"answer": [
{
"evidence": [
"(1) AutoJudge consistently and significantly outperforms all the baselines, including RC models and other neural text classification models, which shows the effectiveness and robustness of our model.",
"(2) RC models achieve better performance than most text classification models (excluding GRU+Attention), which indicates that reading mechanism is a better way to integrate information from heterogeneous yet complementary inputs. On the contrary, simply adding law articles as a part of the reading materials makes no difference in performance. Note that, GRU+Attention employ similar attention mechanism as RC does and takes additional law articles into consideration, thus achieves comparable performance with RC models.",
"(3) Comparing with conventional RC models, AutoJudge achieves significant improvement with the consideration of additional law articles. It reflects the difference between LRC and conventional RC models. We re-formalize LRC in legal area to incorporate law articles via the reading mechanism, which can enhance judgment prediction. Moreover, CNN/GRU+law decrease the performance by simply concatenating original text with law articles, while GRU+Attention/AutoJudge increase the performance by integrating law articles with attention mechanism. It shows the importance and rationality of using attention mechanism to capture the interaction between multiple inputs."
],
"extractive_spans": [
"AutoJudge consistently and significantly outperforms all the baselines",
"RC models achieve better performance than most text classification models (excluding GRU+Attention)",
"Comparing with conventional RC models, AutoJudge achieves significant improvement"
],
"free_form_answer": "",
"highlighted_evidence": [
"(1) AutoJudge consistently and significantly outperforms all the baselines, including RC models and other neural text classification models, which shows the effectiveness and robustness of our model.",
"(2) RC models achieve better performance than most text classification models (excluding GRU+Attention), which indicates that reading mechanism is a better way to integrate information from heterogeneous yet complementary inputs.",
"(3) Comparing with conventional RC models, AutoJudge achieves significant improvement with the consideration of additional law articles."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"653cf60aec4ba3d524f77af342a0c643d1b0e11f",
"6b87d17119a851b3a262a576d9b15d49c1848da1"
],
"answer": [
{
"evidence": [
"We employ Jieba for Chinese word segmentation and keep the top INLINEFORM0 frequent words. The word embedding size is set to 128 and the other low-frequency words are replaced with the mark <UNK>. The hidden size of GRU is set to 128 for each direction in Bi-GRU. In the pair-wise attentive reader, the hidden state is set to 256 for mGRu. In the CNN layer, filter windows are set to 1, 3, 4, and 5 with each filter containing 200 feature maps. We add a dropout layer BIBREF38 after the CNN layer with a dropout rate of INLINEFORM1 . We use Adam BIBREF32 for training and set learning rate to INLINEFORM2 , INLINEFORM3 to INLINEFORM4 , INLINEFORM5 to INLINEFORM6 , INLINEFORM7 to INLINEFORM8 , batch size to 64. We employ precision, recall, F1 and accuracy for evaluation metrics. We repeat all the experiments for 10 times, and report the average results."
],
"extractive_spans": [
"precision, recall, F1 and accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"We employ precision, recall, F1 and accuracy for evaluation metrics."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We employ Jieba for Chinese word segmentation and keep the top INLINEFORM0 frequent words. The word embedding size is set to 128 and the other low-frequency words are replaced with the mark <UNK>. The hidden size of GRU is set to 128 for each direction in Bi-GRU. In the pair-wise attentive reader, the hidden state is set to 256 for mGRu. In the CNN layer, filter windows are set to 1, 3, 4, and 5 with each filter containing 200 feature maps. We add a dropout layer BIBREF38 after the CNN layer with a dropout rate of INLINEFORM1 . We use Adam BIBREF32 for training and set learning rate to INLINEFORM2 , INLINEFORM3 to INLINEFORM4 , INLINEFORM5 to INLINEFORM6 , INLINEFORM7 to INLINEFORM8 , batch size to 64. We employ precision, recall, F1 and accuracy for evaluation metrics. We repeat all the experiments for 10 times, and report the average results."
],
"extractive_spans": [
"precision",
"recall",
"F1 ",
"accuracy "
],
"free_form_answer": "",
"highlighted_evidence": [
"We employ precision, recall, F1 and accuracy for evaluation metrics. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"4db64df31608ee95f5c7deb00379bd91c2edd0fa",
"a5e7ee0a2216fe02bc5a0ccf784cdb856cee8e6e"
],
"answer": [
{
"evidence": [
"To evaluate the proposed LRC framework and the AutoJudge model, we carry out a series of experiments on the divorce proceedings, a typical yet complex field of civil cases. Divorce proceedings often come with several kinds of pleas, e.g. seeking divorce, custody of children, compensation, and maintenance, which focuses on different aspects and thus makes it a challenge for judgment prediction."
],
"extractive_spans": [
"divorce "
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate the proposed LRC framework and the AutoJudge model, we carry out a series of experiments on the divorce proceedings, a typical yet complex field of civil cases. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To evaluate the proposed LRC framework and the AutoJudge model, we carry out a series of experiments on the divorce proceedings, a typical yet complex field of civil cases. Divorce proceedings often come with several kinds of pleas, e.g. seeking divorce, custody of children, compensation, and maintenance, which focuses on different aspects and thus makes it a challenge for judgment prediction."
],
"extractive_spans": [
"divorce"
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate the proposed LRC framework and the AutoJudge model, we carry out a series of experiments on the divorce proceedings, a typical yet complex field of civil cases. Divorce proceedings often come with several kinds of pleas, e.g. seeking divorce, custody of children, compensation, and maintenance, which focuses on different aspects and thus makes it a challenge for judgment prediction."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4c9da349f0b5775780db2837c1d967710089e03a",
"9a8fd85db178960d1d79cc60bb4b6e2e79f731c7"
],
"answer": [
{
"evidence": [
"For comparison, we adopt and re-implement three kinds of baselines as follows:",
"We implement an SVM with lexical features in accordance with previous works BIBREF16 , BIBREF17 , BIBREF1 , BIBREF15 , BIBREF4 and select the best feature set on the development set.",
"We implement and fine-tune a series of neural text classifiers, including attention-based method BIBREF3 and other methods we deem important. CNN BIBREF18 and GRU BIBREF27 , BIBREF21 take as input the concatenation of fact description and plea. Similarly, CNN/GRU+law refers to using the concatenation of fact description, plea and law articles as inputs.",
"We implement and train some off-the-shelf RC models, including r-net BIBREF5 and AoA BIBREF6 , which are the leading models on SQuAD leaderboard. In our initial experiments, these models take fact description as passage and plea as query. Further, Law articles are added to the fact description as a part of the reading materials, which is a simple way to consider them as well.",
"FLOAT SELECTED: Table 1: Experimental results(%). P/R/F1 are reported for positive samples and calculated as the mean score over 10-time experiments. Acc is defined as the proportion of test samples classified correctly, equal to micro-precision. MaxFreq refers to always predicting the most frequent label, i.e. support in our dataset. * indicates methods proposed in previous works."
],
"extractive_spans": [
"SVM ",
"CNN ",
"GRU ",
"CNN/GRU+law",
"r-net ",
"AoA "
],
"free_form_answer": "",
"highlighted_evidence": [
"For comparison, we adopt and re-implement three kinds of baselines as follows:\n\nWe implement an SVM with lexical features in accordance with previous works BIBREF16 , BIBREF17 , BIBREF1 , BIBREF15 , BIBREF4 and select the best feature set on the development set.\n\nWe implement and fine-tune a series of neural text classifiers, including attention-based method BIBREF3 and other methods we deem important. CNN BIBREF18 and GRU BIBREF27 , BIBREF21 take as input the concatenation of fact description and plea. Similarly, CNN/GRU+law refers to using the concatenation of fact description, plea and law articles as inputs.\n\nWe implement and train some off-the-shelf RC models, including r-net BIBREF5 and AoA BIBREF6 , which are the leading models on SQuAD leaderboard. In our initial experiments, these models take fact description as passage and plea as query. Further, Law articles are added to the fact description as a part of the reading materials, which is a simple way to consider them as well.",
"FLOAT SELECTED: Table 1: Experimental results(%). P/R/F1 are reported for positive samples and calculated as the mean score over 10-time experiments. Acc is defined as the proportion of test samples classified correctly, equal to micro-precision. MaxFreq refers to always predicting the most frequent label, i.e. support in our dataset. * indicates methods proposed in previous works."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For comparison, we adopt and re-implement three kinds of baselines as follows:",
"We implement an SVM with lexical features in accordance with previous works BIBREF16 , BIBREF17 , BIBREF1 , BIBREF15 , BIBREF4 and select the best feature set on the development set.",
"We implement and fine-tune a series of neural text classifiers, including attention-based method BIBREF3 and other methods we deem important. CNN BIBREF18 and GRU BIBREF27 , BIBREF21 take as input the concatenation of fact description and plea. Similarly, CNN/GRU+law refers to using the concatenation of fact description, plea and law articles as inputs.",
"We implement and train some off-the-shelf RC models, including r-net BIBREF5 and AoA BIBREF6 , which are the leading models on SQuAD leaderboard. In our initial experiments, these models take fact description as passage and plea as query. Further, Law articles are added to the fact description as a part of the reading materials, which is a simple way to consider them as well.",
"In this paper, we explore the task of predicting judgments of civil cases. Comparing with conventional text classification framework, we propose Legal Reading Comprehension framework to handle multiple and complex textual inputs. Moreover, we present a novel neural model, AutoJudge, to incorporate law articles for judgment prediction. In experiments, we compare our model on divorce proceedings with various state-of-the-art baselines of various frameworks. Experimental results show that our model achieves considerable improvement than all the baselines. Besides, visualization results also demonstrate the effectiveness and interpretability of our proposed model."
],
"extractive_spans": [
"SVM with lexical features in accordance with previous works BIBREF16 , BIBREF17 , BIBREF1 , BIBREF15 , BIBREF4",
"attention-based method BIBREF3 and other methods we deem important",
"some off-the-shelf RC models, including r-net BIBREF5 and AoA BIBREF6 , which are the leading models on SQuAD leaderboard"
],
"free_form_answer": "",
"highlighted_evidence": [
"For comparison, we adopt and re-implement three kinds of baselines as follows:\n\nWe implement an SVM with lexical features in accordance with previous works BIBREF16 , BIBREF17 , BIBREF1 , BIBREF15 , BIBREF4 and select the best feature set on the development set.\n\nWe implement and fine-tune a series of neural text classifiers, including attention-based method BIBREF3 and other methods we deem important. CNN BIBREF18 and GRU BIBREF27 , BIBREF21 take as input the concatenation of fact description and plea. Similarly, CNN/GRU+law refers to using the concatenation of fact description, plea and law articles as inputs.\n\nWe implement and train some off-the-shelf RC models, including r-net BIBREF5 and AoA BIBREF6 , which are the leading models on SQuAD leaderboard. ",
"Moreover, we present a novel neural model, AutoJudge, to incorporate law articles for judgment prediction. In experiments, we compare our model on divorce proceedings with various state-of-the-art baselines of various frameworks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3a25946ae2e4b643de0d4e073557b0da452ebe3f",
"ef74abc7b96df4b7e89a3dff9beb19e9e97888d2"
],
"answer": [
{
"evidence": [
"For experiments, we train and evaluate our models in the civil law system of mainland China. We collect and construct a large-scale real-world data set of INLINEFORM0 case documents that the Supreme People's Court of People's Republic of China has made publicly available. Fact description, pleas, and results can be extracted easily from these case documents with regular expressions, since the original documents have special typographical characteristics indicating the discourse structure. We also take into account law articles and their corresponding juridical interpretations. We also implement and evaluate previous methods on our dataset, which prove to be strong baselines."
],
"extractive_spans": [],
"free_form_answer": "100 000 documents",
"highlighted_evidence": [
"For experiments, we train and evaluate our models in the civil law system of mainland China. We collect and construct a large-scale real-world data set of INLINEFORM0 case documents that the Supreme People's Court of People's Republic of China has made publicly available"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Since none of the datasets from previous works have been published, we decide to build a new one. We randomly collect INLINEFORM0 cases from China Judgments Online, among which INLINEFORM1 cases are for training, INLINEFORM2 each for validation and testing. Among the original cases, INLINEFORM3 are granted divorce and others not. There are INLINEFORM4 valid pleas in total, with INLINEFORM5 supported and INLINEFORM6 rejected. Note that, if the divorce plea in a case is not granted, the other pleas of this case will not be considered by the judge. Case materials are all natural language sentences, with averagely INLINEFORM7 tokens per fact description and INLINEFORM8 per plea. There are 62 relevant law articles in total, each with INLINEFORM9 tokens averagely. Note that the case documents include special typographical signals, making it easy to extract labeled data with regular expression."
],
"extractive_spans": [
" INLINEFORM1 cases"
],
"free_form_answer": "",
"highlighted_evidence": [
"We randomly collect INLINEFORM0 cases from China Judgments Online, among which INLINEFORM1 cases are for training, INLINEFORM2 each for validation and testing. Among the original cases, INLINEFORM3 are granted divorce and others not. There are INLINEFORM4 valid pleas in total, with INLINEFORM5 supported and INLINEFORM6 rejected."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"475ce1d8cb657dfa0842626d9a2e38cca61b5094"
],
"answer": [
{
"evidence": [
"Since none of the datasets from previous works have been published, we decide to build a new one. We randomly collect INLINEFORM0 cases from China Judgments Online, among which INLINEFORM1 cases are for training, INLINEFORM2 each for validation and testing. Among the original cases, INLINEFORM3 are granted divorce and others not. There are INLINEFORM4 valid pleas in total, with INLINEFORM5 supported and INLINEFORM6 rejected. Note that, if the divorce plea in a case is not granted, the other pleas of this case will not be considered by the judge. Case materials are all natural language sentences, with averagely INLINEFORM7 tokens per fact description and INLINEFORM8 per plea. There are 62 relevant law articles in total, each with INLINEFORM9 tokens averagely. Note that the case documents include special typographical signals, making it easy to extract labeled data with regular expression."
],
"extractive_spans": [
"build a new one",
"collect INLINEFORM0 cases from China Judgments Online"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since none of the datasets from previous works have been published, we decide to build a new one. We randomly collect INLINEFORM0 cases from China Judgments Online, among which INLINEFORM1 cases are for training, INLINEFORM2 each for validation and testing."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"what are their results on the constructed dataset?",
"what evaluation metrics are reported?",
"what civil field is the dataset about?",
"what are the state-of-the-art models?",
"what is the size of the real-world civil case dataset?",
"what datasets are used in the experiment?"
],
"question_id": [
"aa2948209cc33b071dbf294822e72bb136678345",
"d9412dda3279729e95fcb35cbed09e61577a896e",
"41b70699514703820435b00efbc3aac4dd67560a",
"e3c9e4bc7bb93461856e1f4354f33010bc7d28d5",
"06cc8fcafc0880cf69a2514bb7341642b9833041",
"d650101712e36594bd77b45930a990402a455222"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: An Example of LRC.",
"Figure 2: An overview of AutoJudge.",
"Table 2: Experimental results of ablation tests (%).",
"Table 1: Experimental results(%). P/R/F1 are reported for positive samples and calculated as the mean score over 10-time experiments. Acc is defined as the proportion of test samples classified correctly, equal to micro-precision. MaxFreq refers to always predicting the most frequent label, i.e. support in our dataset. * indicates methods proposed in previous works.",
"Figure 3: Visualization of Attention Mechanism."
],
"file": [
"1-Figure1-1.png",
"4-Figure2-1.png",
"7-Table2-1.png",
"7-Table1-1.png",
"8-Figure3-1.png"
]
} | [
"what is the size of the real-world civil case dataset?"
] | [
[
"1809.06537-Dataset Construction for Evaluation-0",
"1809.06537-Introduction-6"
]
] | [
"100 000 documents"
] | 200 |
1908.08345 | Text Summarization with Pretrained Encoders | Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at this https URL | {
"paragraphs": [
[
"Language model pretraining has advanced the state of the art in many NLP tasks ranging from sentiment analysis, to question answering, natural language inference, named entity recognition, and textual similarity. State-of-the-art pretrained models include ELMo BIBREF1, GPT BIBREF2, and more recently Bidirectional Encoder Representations from Transformers (Bert; BIBREF0). Bert combines both word and sentence representations in a single very large Transformer BIBREF3; it is pretrained on vast amounts of text, with an unsupervised objective of masked language modeling and next-sentence prediction and can be fine-tuned with various task-specific objectives.",
"In most cases, pretrained language models have been employed as encoders for sentence- and paragraph-level natural language understanding problems BIBREF0 involving various classification tasks (e.g., predicting whether any two sentences are in an entailment relationship; or determining the completion of a sentence among four alternative sentences). In this paper, we examine the influence of language model pretraining on text summarization. Different from previous tasks, summarization requires wide-coverage natural language understanding going beyond the meaning of individual words and sentences. The aim is to condense a document into a shorter version while preserving most of its meaning. Furthermore, under abstractive modeling formulations, the task requires language generation capabilities in order to create summaries containing novel words and phrases not featured in the source text, while extractive summarization is often defined as a binary classification task with labels indicating whether a text span (typically a sentence) should be included in the summary.",
"We explore the potential of Bert for text summarization under a general framework encompassing both extractive and abstractive modeling paradigms. We propose a novel document-level encoder based on Bert which is able to encode a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers to capture document-level features for extracting sentences. Our abstractive model adopts an encoder-decoder architecture, combining the same pretrained Bert encoder with a randomly-initialized Transformer decoder BIBREF3. We design a new training schedule which separates the optimizers of the encoder and the decoder in order to accommodate the fact that the former is pretrained while the latter must be trained from scratch. Finally, motivated by previous work showing that the combination of extractive and abstractive objectives can help generate better summaries BIBREF4, we present a two-stage approach where the encoder is fine-tuned twice, first with an extractive objective and subsequently on the abstractive summarization task.",
"We evaluate the proposed approach on three single-document news summarization datasets representative of different writing conventions (e.g., important information is concentrated at the beginning of the document or distributed more evenly throughout) and summary styles (e.g., verbose vs. more telegraphic; extractive vs. abstractive). Across datasets, we experimentally show that the proposed models achieve state-of-the-art results under both extractive and abstractive settings. Our contributions in this work are three-fold: a) we highlight the importance of document encoding for the summarization task; a variety of recently proposed techniques aim to enhance summarization performance via copying mechanisms BIBREF5, BIBREF6, BIBREF7, reinforcement learning BIBREF8, BIBREF9, BIBREF10, and multiple communicating encoders BIBREF11. We achieve better results with a minimum-requirement model without using any of these mechanisms; b) we showcase ways to effectively employ pretrained language models in summarization under both extractive and abstractive settings; we would expect any improvements in model pretraining to translate in better summarization in the future; and c) the proposed models can be used as a stepping stone to further improve summarization performance as well as baselines against which new proposals are tested."
],
[
"Pretrained language models BIBREF1, BIBREF2, BIBREF0, BIBREF12, BIBREF13 have recently emerged as a key technology for achieving impressive gains in a wide variety of natural language tasks. These models extend the idea of word embeddings by learning contextual representations from large-scale corpora using a language modeling objective. Bidirectional Encoder Representations from Transformers (Bert; BIBREF0) is a new language representation model which is trained with a masked language modeling and a “next sentence prediction” task on a corpus of 3,300M words.",
"The general architecture of Bert is shown in the left part of Figure FIGREF2. Input text is first preprocessed by inserting two special tokens. [cls] is appended to the beginning of the text; the output representation of this token is used to aggregate information from the whole sequence (e.g., for classification tasks). And token [sep] is inserted after each sentence as an indicator of sentence boundaries. The modified text is then represented as a sequence of tokens $X=[w_1,w_2,\\cdots ,w_n]$. Each token $w_i$ is assigned three kinds of embeddings: token embeddings indicate the meaning of each token, segmentation embeddings are used to discriminate between two sentences (e.g., during a sentence-pair classification task) and position embeddings indicate the position of each token within the text sequence. These three embeddings are summed to a single input vector $x_i$ and fed to a bidirectional Transformer with multiple layers:",
"",
"where $h^0=x$ are the input vectors; $\\mathrm {LN}$ is the layer normalization operation BIBREF14; $\\mathrm {MHAtt}$ is the multi-head attention operation BIBREF3; superscript $l$ indicates the depth of the stacked layer. On the top layer, Bert will generate an output vector $t_i$ for each token with rich contextual information.",
"Pretrained language models are usually used to enhance performance in language understanding tasks. Very recently, there have been attempts to apply pretrained models to various generation problems BIBREF15, BIBREF16. When fine-tuning for a specific task, unlike ELMo whose parameters are usually fixed, parameters in Bert are jointly fine-tuned with additional task-specific parameters."
],
[
"Extractive summarization systems create a summary by identifying (and subsequently concatenating) the most important sentences in a document. Neural models consider extractive summarization as a sentence classification problem: a neural encoder creates sentence representations and a classifier predicts which sentences should be selected as summaries. SummaRuNNer BIBREF7 is one of the earliest neural approaches adopting an encoder based on Recurrent Neural Networks. Refresh BIBREF8 is a reinforcement learning-based system trained by globally optimizing the ROUGE metric. More recent work achieves higher performance with more sophisticated model structures. Latent BIBREF17 frames extractive summarization as a latent variable inference problem; instead of maximizing the likelihood of “gold” standard labels, their latent model directly maximizes the likelihood of human summaries given selected sentences. Sumo BIBREF18 capitalizes on the notion of structured attention to induce a multi-root dependency tree representation of the document while predicting the output summary. NeuSum BIBREF19 scores and selects sentences jointly and represents the state of the art in extractive summarization."
],
[
"Neural approaches to abstractive summarization conceptualize the task as a sequence-to-sequence problem, where an encoder maps a sequence of tokens in the source document $\\mathbf {x} = [x_1, ..., x_n]$ to a sequence of continuous representations $\\mathbf {z} = [z_1, ..., z_n]$, and a decoder then generates the target summary $\\mathbf {y} = [y_1, ..., y_m]$ token-by-token, in an auto-regressive manner, hence modeling the conditional probability: $p(y_1, ..., y_m|x_1, ..., x_n)$.",
"BIBREF20 and BIBREF21 were among the first to apply the neural encoder-decoder architecture to text summarization. BIBREF6 enhance this model with a pointer-generator network (PTgen) which allows it to copy words from the source text, and a coverage mechanism (Cov) which keeps track of words that have been summarized. BIBREF11 propose an abstractive system where multiple agents (encoders) represent the document together with a hierarchical attention mechanism (over the agents) for decoding. Their Deep Communicating Agents (DCA) model is trained end-to-end with reinforcement learning. BIBREF9 also present a deep reinforced model (DRM) for abstractive summarization which handles the coverage problem with an intra-attention mechanism where the decoder attends over previously generated words. BIBREF4 follow a bottom-up approach (BottomUp); a content selector first determines which phrases in the source document should be part of the summary, and a copy mechanism is applied only to preselected phrases during decoding. BIBREF22 propose an abstractive model which is particularly suited to extreme summarization (i.e., single sentence summaries), based on convolutional neural networks and additionally conditioned on topic distributions (TConvS2S)."
],
[
"Although Bert has been used to fine-tune various NLP tasks, its application to summarization is not as straightforward. Since Bert is trained as a masked-language model, the output vectors are grounded to tokens instead of sentences, while in extractive summarization, most models manipulate sentence-level representations. Although segmentation embeddings represent different sentences in Bert, they only apply to sentence-pair inputs, while in summarization we must encode and manipulate multi-sentential inputs. Figure FIGREF2 illustrates our proposed Bert architecture for Summarization (which we call BertSum).",
"In order to represent individual sentences, we insert external [cls] tokens at the start of each sentence, and each [cls] symbol collects features for the sentence preceding it. We also use interval segment embeddings to distinguish multiple sentences within a document. For $sent_i$ we assign segment embedding $E_A$ or $E_B$ depending on whether $i$ is odd or even. For example, for document $[sent_1, sent_2, sent_3, sent_4, sent_5]$, we would assign embeddings $[E_A, E_B, E_A,E_B, E_A]$. This way, document representations are learned hierarchically where lower Transformer layers represent adjacent sentences, while higher layers, in combination with self-attention, represent multi-sentence discourse.",
"Position embeddings in the original Bert model have a maximum length of 512; we overcome this limitation by adding more position embeddings that are initialized randomly and fine-tuned with other parameters in the encoder."
],
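The input construction described in the preceding paragraphs can be made concrete with a short sketch. This is not the authors' code: the tokenizer callback, the special-token ids and the helper names are assumptions, and only the [cls]-per-sentence insertion, the alternating interval segment ids and the positions of the sentence vectors follow the text above.

```python
# Minimal sketch of BertSum-style input construction (illustrative, not the released code).
from typing import Callable, List, Tuple

def build_bertsum_inputs(sentences: List[List[str]], cls_id: int, sep_id: int,
                         token_to_id: Callable[[str], int]) -> Tuple[List[int], List[int], List[int]]:
    """Return (token_ids, segment_ids, cls_positions) for one document."""
    token_ids, segment_ids, cls_positions = [], [], []
    for i, sent in enumerate(sentences):
        # interval segment embeddings: E_A / E_B alternate per sentence
        # (0-indexed here, so the paper's sent_1, sent_3, ... receive segment 0 = E_A)
        seg = i % 2
        cls_positions.append(len(token_ids))   # the [cls] vector at this position represents this sentence
        ids = [cls_id] + [token_to_id(w) for w in sent] + [sep_id]
        token_ids.extend(ids)
        segment_ids.extend([seg] * len(ids))
    return token_ids, segment_ids, cls_positions
```

Position ids simply run 0, 1, 2, ... over token_ids; when a document exceeds 512 tokens, the extra, randomly initialized position embeddings mentioned above are indexed past 511.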
[
"Let $d$ denote a document containing sentences $[sent_1, sent_2, \\cdots , sent_m]$, where $sent_i$ is the $i$-th sentence in the document. Extractive summarization can be defined as the task of assigning a label $y_i \\in \\lbrace 0, 1\\rbrace $ to each $sent_i$, indicating whether the sentence should be included in the summary. It is assumed that summary sentences represent the most important content of the document.",
"With BertSum, vector $t_i$ which is the vector of the $i$-th [cls] symbol from the top layer can be used as the representation for $sent_i$. Several inter-sentence Transformer layers are then stacked on top of Bert outputs, to capture document-level features for extracting summaries:",
"where $h^0=\\mathrm {PosEmb}(T)$; $T$ denotes the sentence vectors output by BertSum, and function $\\mathrm {PosEmb}$ adds sinusoid positional embeddings BIBREF3 to $T$, indicating the position of each sentence.",
"The final output layer is a sigmoid classifier:",
"where $h^L_i$ is the vector for $sent_i$ from the top layer (the $L$-th layer ) of the Transformer. In experiments, we implemented Transformers with $L=1, 2, 3$ and found that a Transformer with $L=2$ performed best. We name this model BertSumExt.",
"The loss of the model is the binary classification entropy of prediction $\\hat{y}_i$ against gold label $y_i$. Inter-sentence Transformer layers are jointly fine-tuned with BertSum. We use the Adam optimizer with $\\beta _1=0.9$, and $\\beta _2=0.999$). Our learning rate schedule follows BIBREF3 with warming-up ($ \\operatorname{\\operatorname{warmup}}=10,000$):"
],
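Since the display equations of this subsection are not reproduced in the dump, the following PyTorch sketch illustrates the extractive head: $L=2$ inter-sentence Transformer layers over the [cls] sentence vectors, positional information added to $T$, and a sigmoid score per sentence trained with binary cross-entropy. The learned positional parameter is a stand-in for the sinusoid PosEmb of the paper, and all sizes other than those quoted in the text are assumptions.

```python
import torch
import torch.nn as nn

class ExtSummarizerHead(nn.Module):
    """Illustrative BertSumExt-style head (not the authors' implementation)."""
    def __init__(self, hidden: int = 768, layers: int = 2, heads: int = 8, ff: int = 2048):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads, dim_feedforward=ff,
                                           dropout=0.1, batch_first=True)  # batch_first needs PyTorch >= 1.9
        self.inter_sentence = nn.TransformerEncoder(layer, num_layers=layers)
        self.pos = nn.Parameter(torch.zeros(1, 512, hidden))  # stand-in for the sinusoid PosEmb of the paper
        self.score = nn.Linear(hidden, 1)

    def forward(self, sent_vecs: torch.Tensor, sent_mask: torch.Tensor) -> torch.Tensor:
        # sent_vecs: (batch, n_sents, hidden) = T, the [cls] vectors produced by BertSum
        # sent_mask: (batch, n_sents) boolean, True for real (non-padding) sentences
        h = sent_vecs + self.pos[:, : sent_vecs.size(1)]
        h = self.inter_sentence(h, src_key_padding_mask=~sent_mask)
        return torch.sigmoid(self.score(h)).squeeze(-1)       # one score per sentence

# training loss against the oracle labels:
# loss = nn.functional.binary_cross_entropy(y_hat[sent_mask], labels[sent_mask])
```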
[
"We use a standard encoder-decoder framework for abstractive summarization BIBREF6. The encoder is the pretrained BertSum and the decoder is a 6-layered Transformer initialized randomly. It is conceivable that there is a mismatch between the encoder and the decoder, since the former is pretrained while the latter must be trained from scratch. This can make fine-tuning unstable; for example, the encoder might overfit the data while the decoder underfits, or vice versa. To circumvent this, we design a new fine-tuning schedule which separates the optimizers of the encoder and the decoder.",
"We use two Adam optimizers with $\\beta _1=0.9$ and $\\beta _2=0.999$ for the encoder and the decoder, respectively, each with different warmup-steps and learning rates:",
"where $\\tilde{lr}_{\\mathcal {E}}=2e^{-3}$, and $\\operatorname{\\operatorname{warmup}}_{\\mathcal {E}}=20,000$ for the encoder and $\\tilde{lr}_{\\mathcal {D}}=0.1$, and $\\operatorname{\\operatorname{warmup}}_{\\mathcal {D}}=10,000$ for the decoder. This is based on the assumption that the pretrained encoder should be fine-tuned with a smaller learning rate and smoother decay (so that the encoder can be trained with more accurate gradients when the decoder is becoming stable).",
"In addition, we propose a two-stage fine-tuning approach, where we first fine-tune the encoder on the extractive summarization task (Section SECREF8) and then fine-tune it on the abstractive summarization task (Section SECREF13). Previous work BIBREF4, BIBREF23 suggests that using extractive objectives can boost the performance of abstractive summarization. Also notice that this two-stage approach is conceptually very simple, the model can take advantage of information shared between these two tasks, without fundamentally changing its architecture. We name the default abstractive model BertSumAbs and the two-stage fine-tuned model BertSumExtAbs."
],
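The separate-optimizer schedule described above follows the Noam-style recipe of the cited Transformer paper, i.e. the learning rate is the base rate scaled by min(step^{-0.5}, step * warmup^{-1.5}). The sketch below encodes that assumption with the constants quoted in the text; the authors' actual training loop may differ.

```python
import torch

def noam_factor(step: int, warmup: int) -> float:
    """Warmup-then-decay factor applied multiplicatively to the base learning rate."""
    step = max(step, 1)
    return min(step ** -0.5, step * warmup ** -1.5)

def build_optimizers(encoder: torch.nn.Module, decoder: torch.nn.Module):
    opt_enc = torch.optim.Adam(encoder.parameters(), lr=2e-3, betas=(0.9, 0.999))
    opt_dec = torch.optim.Adam(decoder.parameters(), lr=0.1, betas=(0.9, 0.999))
    sch_enc = torch.optim.lr_scheduler.LambdaLR(opt_enc, lambda s: noam_factor(s, 20000))
    sch_dec = torch.optim.lr_scheduler.LambdaLR(opt_dec, lambda s: noam_factor(s, 10000))
    return (opt_enc, sch_enc), (opt_dec, sch_dec)

# per training step: loss.backward(), then step() on both optimizers and both schedulers
```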
[
"In this section, we describe the summarization datasets used in our experiments and discuss various implementation details."
],
[
"We evaluated our model on three benchmark datasets, namely the CNN/DailyMail news highlights dataset BIBREF24, the New York Times Annotated Corpus (NYT; BIBREF25), and XSum BIBREF22. These datasets represent different summary styles ranging from highlights to very brief one sentence summaries. The summaries also vary with respect to the type of rewriting operations they exemplify (e.g., some showcase more cut and paste operations while others are genuinely abstractive). Table TABREF12 presents statistics on these datasets (test set); example (gold-standard) summaries are provided in the supplementary material."
],
[
"contains news articles and associated highlights, i.e., a few bullet points giving a brief overview of the article. We used the standard splits of BIBREF24 for training, validation, and testing (90,266/1,220/1,093 CNN documents and 196,961/12,148/10,397 DailyMail documents). We did not anonymize entities. We first split sentences with the Stanford CoreNLP toolkit BIBREF26 and pre-processed the dataset following BIBREF6. Input documents were truncated to 512 tokens.",
""
],
[
"contains 110,540 articles with abstractive summaries. Following BIBREF27, we split these into 100,834/9,706 training/test examples, based on the date of publication (the test set contains all articles published from January 1, 2007 onward). We used 4,000 examples from the training as validation set. We also followed their filtering procedure, documents with summaries less than 50 words were removed from the dataset. The filtered test set (NYT50) includes 3,452 examples. Sentences were split with the Stanford CoreNLP toolkit BIBREF26 and pre-processed following BIBREF27. Input documents were truncated to 800 tokens.",
""
],
[
"contains 226,711 news articles accompanied with a one-sentence summary, answering the question “What is this article about?”. We used the splits of BIBREF22 for training, validation, and testing (204,045/11,332/11,334) and followed the pre-processing introduced in their work. Input documents were truncated to 512 tokens.",
"Aside from various statistics on the three datasets, Table TABREF12 also reports the proportion of novel bi-grams in gold summaries as a measure of their abstractiveness. We would expect models with extractive biases to perform better on datasets with (mostly) extractive summaries, and abstractive models to perform more rewrite operations on datasets with abstractive summaries. CNN/DailyMail and NYT are somewhat abstractive, while XSum is highly abstractive."
],
[
"For both extractive and abstractive settings, we used PyTorch, OpenNMT BIBREF28 and the `bert-base-uncased' version of Bert to implement BertSum. Both source and target texts were tokenized with Bert's subwords tokenizer."
],
[
"All extractive models were trained for 50,000 steps on 3 GPUs (GTX 1080 Ti) with gradient accumulation every two steps. Model checkpoints were saved and evaluated on the validation set every 1,000 steps. We selected the top-3 checkpoints based on the evaluation loss on the validation set, and report the averaged results on the test set. We used a greedy algorithm similar to BIBREF7 to obtain an oracle summary for each document to train extractive models. The algorithm generates an oracle consisting of multiple sentences which maximize the ROUGE-2 score against the gold summary.",
"When predicting summaries for a new document, we first use the model to obtain the score for each sentence. We then rank these sentences by their scores from highest to lowest, and select the top-3 sentences as the summary.",
"During sentence selection we use Trigram Blocking to reduce redundancy BIBREF9. Given summary $S$ and candidate sentence $c$, we skip $c$ if there exists a trigram overlapping between $c$ and $S$. The intuition is similar to Maximal Marginal Relevance (MMR; BIBREF29); we wish to minimize the similarity between the sentence being considered and sentences which have been already selected as part of the summary."
],
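A compact reimplementation of the selection step just described (score, rank, keep the top 3, and apply Trigram Blocking) is given below; it is a sketch based on the text, not the released code.

```python
from typing import List

def trigrams(tokens: List[str]) -> set:
    return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

def select_summary(sentences: List[List[str]], scores: List[float], max_sents: int = 3) -> List[List[str]]:
    """Pick up to max_sents highest-scoring sentences, skipping candidates that share a trigram."""
    chosen, seen = [], set()
    for idx in sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True):
        cand = trigrams(sentences[idx])
        if cand & seen:                    # Trigram Blocking: overlap with already selected sentences
            continue
        chosen.append(idx)
        seen |= cand
        if len(chosen) == max_sents:
            break
    return [sentences[i] for i in sorted(chosen)]   # restore document order in the output
```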
[
"In all abstractive models, we applied dropout (with probability $0.1$) before all linear layers; label smoothing BIBREF30 with smoothing factor $0.1$ was also used. Our Transformer decoder has 768 hidden units and the hidden size for all feed-forward layers is 2,048. All models were trained for 200,000 steps on 4 GPUs (GTX 1080 Ti) with gradient accumulation every five steps. Model checkpoints were saved and evaluated on the validation set every 2,500 steps. We selected the top-3 checkpoints based on their evaluation loss on the validation set, and report the averaged results on the test set.",
"During decoding we used beam search (size 5), and tuned the $\\alpha $ for the length penalty BIBREF31 between $0.6$ and 1 on the validation set; we decode until an end-of-sequence token is emitted and repeated trigrams are blocked BIBREF9. It is worth noting that our decoder applies neither a copy nor a coverage mechanism BIBREF6, despite their popularity in abstractive summarization. This is mainly because we focus on building a minimum-requirements model and these mechanisms may introduce additional hyper-parameters to tune. Thanks to the subwords tokenizer, we also rarely observe issues with out-of-vocabulary words in the output; moreover, trigram-blocking produces diverse summaries managing to reduce repetitions."
],
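For the length penalty tuned above, a widely used formulation from the cited machine-translation work divides the cumulative log-probability of a hypothesis by ((5 + |Y|)/6)^alpha. Whether the decoder here uses exactly this variant is an assumption; the sketch only shows where alpha enters.

```python
from typing import List, Tuple

def length_penalty(length: int, alpha: float) -> float:
    # alpha is tuned between 0.6 and 1.0 on the validation set
    return ((5.0 + length) / 6.0) ** alpha

def rerank(hypotheses: List[Tuple[List[int], float]], alpha: float = 0.6):
    """hypotheses: (token_ids, summed log-probability); returns best hypothesis first."""
    return sorted(hypotheses, key=lambda h: h[1] / length_penalty(len(h[0]), alpha), reverse=True)
```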
[
"We evaluated summarization quality automatically using ROUGE BIBREF32. We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a means of assessing informativeness and the longest common subsequence (ROUGE-L) as a means of assessing fluency. Table TABREF23 summarizes our results on the CNN/DailyMail dataset. The first block in the table includes the results of an extractive Oracle system as an upper bound. We also present the Lead-3 baseline (which simply selects the first three sentences in a document). The second block in the table includes various extractive models trained on the CNN/DailyMail dataset (see Section SECREF5 for an overview). For comparison to our own model, we also implemented a non-pretrained Transformer baseline (TransformerExt) which uses the same architecture as BertSumExt, but with fewer parameters. It is randomly initialized and only trained on the summarization task. TransformerExt has 6 layers, the hidden size is 512, and the feed-forward filter size is 2,048. The model was trained with same settings as in BIBREF3. The third block in Table TABREF23 highlights the performance of several abstractive models on the CNN/DailyMail dataset (see Section SECREF6 for an overview). We also include an abstractive Transformer baseline (TransformerAbs) which has the same decoder as our abstractive BertSum models; the encoder is a 6-layer Transformer with 768 hidden size and 2,048 feed-forward filter size. The fourth block reports results with fine-tuned Bert models: BertSumExt and its two variants (one without interval embeddings, and one with the large version of Bert), BertSumAbs, and BertSumExtAbs. Bert-based models outperform the Lead-3 baseline which is not a strawman; on the CNN/DailyMail corpus it is indeed superior to several extractive BIBREF7, BIBREF8, BIBREF19 and abstractive models BIBREF6. Bert models collectively outperform all previously proposed extractive and abstractive systems, only falling behind the Oracle upper bound. Among Bert variants, BertSumExt performs best which is not entirely surprising; CNN/DailyMail summaries are somewhat extractive and even abstractive models are prone to copying sentences from the source document when trained on this dataset BIBREF6. Perhaps unsurprisingly we observe that larger versions of Bert lead to performance improvements and that interval embeddings bring only slight gains. Table TABREF24 presents results on the NYT dataset. Following the evaluation protocol in BIBREF27, we use limited-length ROUGE Recall, where predicted summaries are truncated to the length of the gold summaries. Again, we report the performance of the Oracle upper bound and Lead-3 baseline. The second block in the table contains previously proposed extractive models as well as our own Transformer baseline. Compress BIBREF27 is an ILP-based model which combines compression and anaphoricity constraints. The third block includes abstractive models from the literature, and our Transformer baseline. Bert-based models are shown in the fourth block. Again, we observe that they outperform previously proposed approaches. On this dataset, abstractive Bert models generally perform better compared to BertSumExt, almost approaching Oracle performance.",
"Table TABREF26 summarizes our results on the XSum dataset. Recall that summaries in this dataset are highly abstractive (see Table TABREF12) consisting of a single sentence conveying the gist of the document. Extractive models here perform poorly as corroborated by the low performance of the Lead baseline (which simply selects the leading sentence from the document), and the Oracle (which selects a single-best sentence in each document) in Table TABREF26. As a result, we do not report results for extractive models on this dataset. The second block in Table TABREF26 presents the results of various abstractive models taken from BIBREF22 and also includes our own abstractive Transformer baseline. In the third block we show the results of our Bert summarizers which again are superior to all previously reported models (by a wide margin)."
],
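The ROUGE figures discussed above were presumably produced with the standard Perl toolkit; for readers who want a quick approximation in Python, the snippet below uses the rouge-score package, whose F1 values can differ slightly from the original scorer.

```python
# pip install rouge-score   (a reimplementation, not the original Perl ROUGE-1.5.5)
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def rouge_f1(reference: str, candidate: str) -> dict:
    """Return ROUGE-1/2/L F1 for one (reference, candidate) pair."""
    return {name: s.fmeasure for name, s in scorer.score(reference, candidate).items()}
```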
[
"Recall that our abstractive model uses separate optimizers for the encoder and decoder. In Table TABREF27 we examine whether the combination of different learning rates ($\\tilde{lr}_{\\mathcal {E}}$ and $\\tilde{lr}_{\\mathcal {D}}$) is indeed beneficial. Specifically, we report model perplexity on the CNN/DailyMail validation set for varying encoder/decoder learning rates. We can see that the model performs best with $\\tilde{lr}_{\\mathcal {E}}=2e-3$ and $\\tilde{lr}_{\\mathcal {D}}=0.1$."
],
[
"In addition to the evaluation based on ROUGE, we also analyzed in more detail the summaries produced by our model. For the extractive setting, we looked at the position (in the source document) of the sentences which were selected to appear in the summary. Figure FIGREF31 shows the proportion of selected summary sentences which appear in the source document at positions 1, 2, and so on. The analysis was conducted on the CNN/DailyMail dataset for Oracle summaries, and those produced by BertSumExt and the TransformerExt. We can see that Oracle summary sentences are fairly smoothly distributed across documents, while summaries created by TransformerExt mostly concentrate on the first document sentences. BertSumExt outputs are more similar to Oracle summaries, indicating that with the pretrained encoder, the model relies less on shallow position features, and learns deeper document representations."
],
[
"We also analyzed the output of abstractive systems by calculating the proportion of novel n-grams that appear in the summaries but not in the source texts. The results are shown in Figure FIGREF33. In the CNN/DailyMail dataset, the proportion of novel n-grams in automatically generated summaries is much lower compared to reference summaries, but in XSum, this gap is much smaller. We also observe that on CNN/DailyMail, BertExtAbs produces less novel n-ngrams than BertAbs, which is not surprising. BertExtAbs is more biased towards selecting sentences from the source document since it is initially trained as an extractive model. The supplementary material includes examples of system output and additional ablation studies."
],
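The abstractiveness statistic used in this analysis, the proportion of summary n-grams absent from the source, reduces to a few lines; the sketch below is illustrative.

```python
from typing import List

def ngrams(tokens: List[str], n: int) -> List[tuple]:
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def novel_ngram_proportion(source: List[str], summary: List[str], n: int = 2) -> float:
    """Fraction of summary n-grams that never appear in the source document."""
    summary_ngrams = ngrams(summary, n)
    if not summary_ngrams:
        return 0.0
    source_ngrams = set(ngrams(source, n))
    return sum(g not in source_ngrams for g in summary_ngrams) / len(summary_ngrams)
```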
[
"In addition to automatic evaluation, we also evaluated system output by eliciting human judgments. We report experiments following a question-answering (QA) paradigm BIBREF33, BIBREF8 which quantifies the degree to which summarization models retain key information from the document. Under this paradigm, a set of questions is created based on the gold summary under the assumption that it highlights the most important document content. Participants are then asked to answer these questions by reading system summaries alone without access to the article. The more questions a system can answer, the better it is at summarizing the document as a whole. Moreover, we also assessed the overall quality of the summaries produced by abstractive systems which due to their ability to rewrite content may produce disfluent or ungrammatical output. Specifically, we followed the Best-Worst Scaling BIBREF34 method where participants were presented with the output of two systems (and the original document) and asked to decide which one was better according to the criteria of Informativeness, Fluency, and Succinctness.",
"Both types of evaluation were conducted on the Amazon Mechanical Turk platform. For the CNN/DailyMail and NYT datasets we used the same documents (20 in total) and questions from previous work BIBREF8, BIBREF18. For XSum, we randomly selected 20 documents (and their questions) from the release of BIBREF22. We elicited 3 responses per HIT. With regard to QA evaluation, we adopted the scoring mechanism from BIBREF33; correct answers were marked with a score of one, partially correct answers with 0.5, and zero otherwise. For quality-based evaluation, the rating of each system was computed as the percentage of times it was chosen as better minus the times it was selected as worse. Ratings thus range from -1 (worst) to 1 (best).",
"Results for extractive and abstractive systems are shown in Tables TABREF37 and TABREF38, respectively. We compared the best performing BertSum model in each setting (extractive or abstractive) against various state-of-the-art systems (whose output is publicly available), the Lead baseline, and the Gold standard as an upper bound. As shown in both tables participants overwhelmingly prefer the output of our model against comparison systems across datasets and evaluation paradigms. All differences between BertSum and comparison models are statistically significant ($p<0.05$), with the exception of TConvS2S (see Table TABREF38; XSum) in the QA evaluation setting."
],
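Both human-evaluation scores reported above come down to simple arithmetic, sketched here for clarity: QA answers are worth 1, 0.5 or 0, and a system's Best-Worst rating is the share of comparisons it won minus the share it lost.

```python
from typing import List

def qa_score(marks: List[float]) -> float:
    """marks: 1.0 for correct, 0.5 for partially correct, 0.0 otherwise."""
    return sum(marks) / len(marks)

def best_worst_rating(times_best: int, times_worst: int, total_comparisons: int) -> float:
    """Ranges from -1 (always judged worse) to +1 (always judged better)."""
    return (times_best - times_worst) / total_comparisons
```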
[
"In this paper, we showcased how pretrained Bert can be usefully applied in text summarization. We introduced a novel document-level encoder and proposed a general framework for both abstractive and extractive summarization. Experimental results across three datasets show that our model achieves state-of-the-art results across the board under automatic and human-based evaluation protocols. Although we mainly focused on document encoding for summarization, in the future, we would like to take advantage the capabilities of Bert for language generation."
],
[
"This research is supported by a Google PhD Fellowship to the first author. We gratefully acknowledge the support of the European Research Council (Lapata, award number 681760, “Translating Multiple Modalities into Text”). We would also like to thank Shashi Narayan for providing us with the XSum dataset."
]
],
"section_name": [
"Introduction",
"Background ::: Pretrained Language Models",
"Background ::: Extractive Summarization",
"Background ::: Abstractive Summarization",
"Fine-tuning Bert for Summarization ::: Summarization Encoder",
"Fine-tuning Bert for Summarization ::: Extractive Summarization",
"Fine-tuning Bert for Summarization ::: Abstractive Summarization",
"Experimental Setup",
"Experimental Setup ::: Summarization Datasets",
"Experimental Setup ::: Summarization Datasets ::: CNN/DailyMail",
"Experimental Setup ::: Summarization Datasets ::: NYT",
"Experimental Setup ::: Summarization Datasets ::: XSum",
"Experimental Setup ::: Implementation Details",
"Experimental Setup ::: Implementation Details ::: Extractive Summarization",
"Experimental Setup ::: Implementation Details ::: Abstractive Summarization",
"Results ::: Automatic Evaluation",
"Results ::: Model Analysis ::: Learning Rates",
"Results ::: Model Analysis ::: Position of Extracted Sentences",
"Results ::: Model Analysis ::: Novel N-grams",
"Results ::: Human Evaluation",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"5d394a19d4bfe16144fe8a759b206459bcd9393c"
],
"answer": [
{
"evidence": [
"In order to represent individual sentences, we insert external [cls] tokens at the start of each sentence, and each [cls] symbol collects features for the sentence preceding it. We also use interval segment embeddings to distinguish multiple sentences within a document. For $sent_i$ we assign segment embedding $E_A$ or $E_B$ depending on whether $i$ is odd or even. For example, for document $[sent_1, sent_2, sent_3, sent_4, sent_5]$, we would assign embeddings $[E_A, E_B, E_A,E_B, E_A]$. This way, document representations are learned hierarchically where lower Transformer layers represent adjacent sentences, while higher layers, in combination with self-attention, represent multi-sentence discourse.",
"Position embeddings in the original Bert model have a maximum length of 512; we overcome this limitation by adding more position embeddings that are initialized randomly and fine-tuned with other parameters in the encoder."
],
"extractive_spans": [
"Bert model have a maximum length of 512; we overcome this limitation by adding more position embeddings",
"we insert external [cls] tokens at the start of each sentence, and each [cls] symbol collects features for the sentence preceding it",
"document representations are learned hierarchically"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to represent individual sentences, we insert external [cls] tokens at the start of each sentence, and each [cls] symbol collects features for the sentence preceding it. We also use interval segment embeddings to distinguish multiple sentences within a document.",
"This way, document representations are learned hierarchically where lower Transformer layers represent adjacent sentences, while higher layers, in combination with self-attention, represent multi-sentence discourse.",
"Position embeddings in the original Bert model have a maximum length of 512; we overcome this limitation by adding more position embeddings that are initialized randomly and fine-tuned with other parameters in the encoder."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"87c2036fba3142e46d20f6ad2b4061d9e272529b",
"ec2791c9eb27bc4e90e3e6d6865413ee9ebc8df0"
],
"answer": [
{
"evidence": [
"We evaluated summarization quality automatically using ROUGE BIBREF32. We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a means of assessing informativeness and the longest common subsequence (ROUGE-L) as a means of assessing fluency. Table TABREF23 summarizes our results on the CNN/DailyMail dataset. The first block in the table includes the results of an extractive Oracle system as an upper bound. We also present the Lead-3 baseline (which simply selects the first three sentences in a document). The second block in the table includes various extractive models trained on the CNN/DailyMail dataset (see Section SECREF5 for an overview). For comparison to our own model, we also implemented a non-pretrained Transformer baseline (TransformerExt) which uses the same architecture as BertSumExt, but with fewer parameters. It is randomly initialized and only trained on the summarization task. TransformerExt has 6 layers, the hidden size is 512, and the feed-forward filter size is 2,048. The model was trained with same settings as in BIBREF3. The third block in Table TABREF23 highlights the performance of several abstractive models on the CNN/DailyMail dataset (see Section SECREF6 for an overview). We also include an abstractive Transformer baseline (TransformerAbs) which has the same decoder as our abstractive BertSum models; the encoder is a 6-layer Transformer with 768 hidden size and 2,048 feed-forward filter size. The fourth block reports results with fine-tuned Bert models: BertSumExt and its two variants (one without interval embeddings, and one with the large version of Bert), BertSumAbs, and BertSumExtAbs. Bert-based models outperform the Lead-3 baseline which is not a strawman; on the CNN/DailyMail corpus it is indeed superior to several extractive BIBREF7, BIBREF8, BIBREF19 and abstractive models BIBREF6. Bert models collectively outperform all previously proposed extractive and abstractive systems, only falling behind the Oracle upper bound. Among Bert variants, BertSumExt performs best which is not entirely surprising; CNN/DailyMail summaries are somewhat extractive and even abstractive models are prone to copying sentences from the source document when trained on this dataset BIBREF6. Perhaps unsurprisingly we observe that larger versions of Bert lead to performance improvements and that interval embeddings bring only slight gains. Table TABREF24 presents results on the NYT dataset. Following the evaluation protocol in BIBREF27, we use limited-length ROUGE Recall, where predicted summaries are truncated to the length of the gold summaries. Again, we report the performance of the Oracle upper bound and Lead-3 baseline. The second block in the table contains previously proposed extractive models as well as our own Transformer baseline. Compress BIBREF27 is an ILP-based model which combines compression and anaphoricity constraints. The third block includes abstractive models from the literature, and our Transformer baseline. Bert-based models are shown in the fourth block. Again, we observe that they outperform previously proposed approaches. On this dataset, abstractive Bert models generally perform better compared to BertSumExt, almost approaching Oracle performance.",
"Table TABREF26 summarizes our results on the XSum dataset. Recall that summaries in this dataset are highly abstractive (see Table TABREF12) consisting of a single sentence conveying the gist of the document. Extractive models here perform poorly as corroborated by the low performance of the Lead baseline (which simply selects the leading sentence from the document), and the Oracle (which selects a single-best sentence in each document) in Table TABREF26. As a result, we do not report results for extractive models on this dataset. The second block in Table TABREF26 presents the results of various abstractive models taken from BIBREF22 and also includes our own abstractive Transformer baseline. In the third block we show the results of our Bert summarizers which again are superior to all previously reported models (by a wide margin).",
"FLOAT SELECTED: Table 2: ROUGE F1 results on CNN/DailyMail test set (R1 and R2 are shorthands for unigram and bigram overlap; RL is the longest common subsequence). Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software.",
"FLOAT SELECTED: Table 4: ROUGE F1 results on the XSum test set. Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software.",
"FLOAT SELECTED: Table 3: ROUGE Recall results on NYT test set. Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software. Table cells are filled with — whenever results are not available."
],
"extractive_spans": [],
"free_form_answer": "Best results on unigram:\nCNN/Daily Mail: Rogue F1 43.85\nNYT: Rogue Recall 49.02\nXSum: Rogue F1 38.81",
"highlighted_evidence": [
"The third block in Table TABREF23 highlights the performance of several abstractive models on the CNN/DailyMail dataset (see Section SECREF6 for an overview).",
"Table TABREF26 summarizes our results on the XSum dataset.",
"Table TABREF24 presents results on the NYT dataset.",
"FLOAT SELECTED: Table 2: ROUGE F1 results on CNN/DailyMail test set (R1 and R2 are shorthands for unigram and bigram overlap; RL is the longest common subsequence). Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software.",
"FLOAT SELECTED: Table 4: ROUGE F1 results on the XSum test set. Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software.",
"FLOAT SELECTED: Table 3: ROUGE Recall results on NYT test set. Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software. Table cells are filled with — whenever results are not available."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: ROUGE F1 results on CNN/DailyMail test set (R1 and R2 are shorthands for unigram and bigram overlap; RL is the longest common subsequence). Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software.",
"FLOAT SELECTED: Table 4: ROUGE F1 results on the XSum test set. Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software."
],
"extractive_spans": [],
"free_form_answer": "Highest scores for ROUGE-1, ROUGE-2 and ROUGE-L on CNN/DailyMail test set are 43.85, 20.34 and 39.90 respectively; on the XSum test set 38.81, 16.50 and 31.27 and on the NYT test set 49.02, 31.02 and 45.55",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: ROUGE F1 results on CNN/DailyMail test set (R1 and R2 are shorthands for unigram and bigram overlap; RL is the longest common subsequence). Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software.",
"FLOAT SELECTED: Table 4: ROUGE F1 results on the XSum test set. Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"c77aaa64e6082205b204e74d4ceff8fc3f3fdf8e",
"d20a5664e55b07cfd497b60c5236f984db73b3f0"
],
"answer": [
{
"evidence": [
"We evaluated our model on three benchmark datasets, namely the CNN/DailyMail news highlights dataset BIBREF24, the New York Times Annotated Corpus (NYT; BIBREF25), and XSum BIBREF22. These datasets represent different summary styles ranging from highlights to very brief one sentence summaries. The summaries also vary with respect to the type of rewriting operations they exemplify (e.g., some showcase more cut and paste operations while others are genuinely abstractive). Table TABREF12 presents statistics on these datasets (test set); example (gold-standard) summaries are provided in the supplementary material."
],
"extractive_spans": [
"CNN/DailyMail news highlights",
"New York Times Annotated Corpus",
"XSum"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluated our model on three benchmark datasets, namely the CNN/DailyMail news highlights dataset BIBREF24, the New York Times Annotated Corpus (NYT; BIBREF25), and XSum BIBREF22."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluated our model on three benchmark datasets, namely the CNN/DailyMail news highlights dataset BIBREF24, the New York Times Annotated Corpus (NYT; BIBREF25), and XSum BIBREF22. These datasets represent different summary styles ranging from highlights to very brief one sentence summaries. The summaries also vary with respect to the type of rewriting operations they exemplify (e.g., some showcase more cut and paste operations while others are genuinely abstractive). Table TABREF12 presents statistics on these datasets (test set); example (gold-standard) summaries are provided in the supplementary material."
],
"extractive_spans": [
"the CNN/DailyMail news highlights dataset BIBREF24",
"the New York Times Annotated Corpus (NYT; BIBREF25)",
"XSum BIBREF22"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluated our model on three benchmark datasets, namely the CNN/DailyMail news highlights dataset BIBREF24, the New York Times Annotated Corpus (NYT; BIBREF25), and XSum BIBREF22."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What is novel about their document-level encoder?",
"What rouge score do they achieve?",
"What are the datasets used for evaluation?"
],
"question_id": [
"97d1ac71eed13d4f51f29aac0e1a554007907df8",
"c17b609b0b090d7e8f99de1445be04f8f66367d4",
"53014cfb506f6fffb22577bf580ae6f4d5317ce5"
],
"question_writer": [
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61"
],
"search_query": [
"summarization",
"summarization",
"summarization"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Architecture of the original BERT model (left) and BERTSUM (right). The sequence on top is the input document, followed by the summation of three kinds of embeddings for each token. The summed vectors are used as input embeddings to several bidirectional Transformer layers, generating contextual vectors for each token. BERTSUM extends BERT by inserting multiple [CLS] symbols to learn sentence representations and using interval segmentation embeddings (illustrated in red and green color) to distinguish multiple sentences.",
"Table 1: Comparison of summarization datasets: size of training, validation, and test sets and average document and summary length (in terms of words and sentences). The proportion of novel bi-grams that do not appear in source documents but do appear in the gold summaries quantifies corpus bias towards extractive methods.",
"Table 2: ROUGE F1 results on CNN/DailyMail test set (R1 and R2 are shorthands for unigram and bigram overlap; RL is the longest common subsequence). Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software.",
"Table 4: ROUGE F1 results on the XSum test set. Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software.",
"Table 3: ROUGE Recall results on NYT test set. Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software. Table cells are filled with — whenever results are not available.",
"Table 5: Model perplexity (CNN/DailyMail; validation set) under different combinations of encoder and decoder learning rates.",
"Figure 2: Proportion of extracted sentences according to their position in the original document.",
"Figure 3: Proportion of novel n-grams in model generated summaries.",
"Table 7: QA-based and ranking-based evaluation. Models with † are significantly different from BERTSUM (using a paired student t-test; p < 0.05). Table cells are filled with — whenever system output is not available. GOLD is not used in QA setting, and LEAD is not used in Rank evaluation.",
"Table 6: QA-based evaluation. Models with † are significantly different from BERTSUM (using a paired student t-test; p < 0.05). Table cells are filled with — whenever system output is not available."
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table4-1.png",
"7-Table3-1.png",
"8-Table5-1.png",
"8-Figure2-1.png",
"9-Figure3-1.png",
"9-Table7-1.png",
"9-Table6-1.png"
]
} | [
"What rouge score do they achieve?"
] | [
[
"1908.08345-6-Table2-1.png",
"1908.08345-7-Table3-1.png",
"1908.08345-7-Table4-1.png",
"1908.08345-Results ::: Automatic Evaluation-0",
"1908.08345-Results ::: Automatic Evaluation-1"
]
] | [
"Highest scores for ROUGE-1, ROUGE-2 and ROUGE-L on CNN/DailyMail test set are 43.85, 20.34 and 39.90 respectively; on the XSum test set 38.81, 16.50 and 31.27 and on the NYT test set 49.02, 31.02 and 45.55"
] | 202 |
1611.02988 | Distant supervision for emotion detection using Facebook reactions | We exploit the Facebook reaction feature in a distant supervised fashion to train a support vector machine classifier for emotion detection, using several feature combinations and combining different Facebook pages. We test our models on existing benchmarks for emotion detection and show that employing only information that is derived completely automatically, thus without relying on any handcrafted lexicon as it's usually done, we can achieve competitive results. The results also show that there is large room for improvement, especially by gearing the collection of Facebook pages, with a view to the target domain. | {
"paragraphs": [
[
"This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/",
"In the spirit of the brevity of social media's messages and reactions, people have got used to express feelings minimally and symbolically, as with hashtags on Twitter and Instagram. On Facebook, people tend to be more wordy, but posts normally receive more simple “likes” than longer comments. Since February 2016, Facebook users can express specific emotions in response to a post thanks to the newly introduced reaction feature (see Section SECREF2 ), so that now a post can be wordlessly marked with an expression of say “joy\" or “surprise\" rather than a generic “like”.",
"It has been observed that this new feature helps Facebook to know much more about their users and exploit this information for targeted advertising BIBREF0 , but interest in people's opinions and how they feel isn't limited to commercial reasons, as it invests social monitoring, too, including health care and education BIBREF1 . However, emotions and opinions are not always expressed this explicitly, so that there is high interest in developing systems towards their automatic detection. Creating manually annotated datasets large enough to train supervised models is not only costly, but also—especially in the case of opinions and emotions—difficult, due to the intrinsic subjectivity of the task BIBREF2 , BIBREF3 . Therefore, research has focused on unsupervised methods enriched with information derived from lexica, which are manually created BIBREF3 , BIBREF4 . Since go2009twitter have shown that happy and sad emoticons can be successfully used as signals for sentiment labels, distant supervision, i.e. using some reasonably safe signals as proxies for automatically labelling training data BIBREF5 , has been used also for emotion recognition, for example exploiting both emoticons and Twitter hashtags BIBREF6 , but mainly towards creating emotion lexica. mohammad2015using use hashtags, experimenting also with highly fine-grained emotion sets (up to almost 600 emotion labels), to create the large Hashtag Emotion Lexicon. Emoticons are used as proxies also by hallsmarmulti, who use distributed vector representations to find which words are interchangeable with emoticons but also which emoticons are used in a similar context.",
"We take advantage of distant supervision by using Facebook reactions as proxies for emotion labels, which to the best of our knowledge hasn't been done yet, and we train a set of Support Vector Machine models for emotion recognition. Our models, differently from existing ones, exploit information which is acquired entirely automatically, and achieve competitive or even state-of-the-art results for some of the emotion labels on existing, standard evaluation datasets. For explanatory purposes, related work is discussed further and more in detail when we describe the benchmarks for evaluation (Section SECREF3 ) and when we compare our models to existing ones (Section SECREF5 ). We also explore and discuss how choosing different sets of Facebook pages as training data provides an intrinsic domain-adaptation method."
],
[
"For years, on Facebook people could leave comments to posts, and also “like” them, by using a thumbs-up feature to explicitly express a generic, rather underspecified, approval. A “like” could thus mean “I like what you said\", but also “I like that you bring up such topic (though I find the content of the article you linked annoying)\".",
"In February 2016, after a short trial, Facebook made a more explicit reaction feature available world-wide. Rather than allowing for the underspecified “like” as the only wordless response to a post, a set of six more specific reactions was introduced, as shown in Figure FIGREF1 : Like, Love, Haha, Wow, Sad and Angry. We use such reactions as proxies for emotion labels associated to posts.",
"We collected Facebook posts and their corresponding reactions from public pages using the Facebook API, which we accessed via the Facebook-sdk python library. We chose different pages (and therefore domains and stances), aiming at a balanced and varied dataset, but we did so mainly based on intuition (see Section SECREF4 ) and with an eye to the nature of the datasets available for evaluation (see Section SECREF5 ). The choice of which pages to select posts from is far from trivial, and we believe this is actually an interesting aspect of our approach, as by using different Facebook pages one can intrinsically tackle the domain-adaptation problem (See Section SECREF6 for further discussion on this). The final collection of Facebook pages for the experiments described in this paper is as follows: FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney.",
"Note that thankful was only available during specific time spans related to certain events, as Mother's Day in May 2016.",
"For each page, we downloaded the latest 1000 posts, or the maximum available if there are fewer, from February 2016, retrieving the counts of reactions for each post. The output is a JSON file containing a list of dictionaries with a timestamp, the post and a reaction vector with frequency values, which indicate how many users used that reaction in response to the post (Figure FIGREF3 ). The resulting emotion vectors must then be turned into an emotion label.",
"In the context of this experiment, we made the simple decision of associating to each post the emotion with the highest count, ignoring like as it is the default and most generic reaction people tend to use. Therefore, for example, to the first post in Figure FIGREF3 , we would associate the label sad, as it has the highest score (284) among the meaningful emotions we consider, though it also has non-zero scores for other emotions. At this stage, we didn't perform any other entropy-based selection of posts, to be investigated in future work."
],
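The labelling rule described above, take the most frequent reaction while ignoring the generic like, can be written down directly; the reaction names below mirror the Facebook feature, but the dictionary format is an assumption rather than the authors' exact data layout.

```python
from typing import Dict, Optional

EMOTION_REACTIONS = ["love", "haha", "wow", "sad", "angry"]   # 'like' is deliberately excluded

def label_post(reaction_counts: Dict[str, int]) -> Optional[str]:
    """e.g. {'like': 571, 'sad': 284, 'angry': 61} -> 'sad' (the majority meaningful reaction)."""
    counts = {r: reaction_counts.get(r, 0) for r in EMOTION_REACTIONS}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else None   # discard posts with no emotional reactions at all
```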
[
"Three datasets annotated with emotions are commonly used for the development and evaluation of emotion detection systems, namely the Affective Text dataset, the Fairy Tales dataset, and the ISEAR dataset. In order to compare our performance to state-of-the-art results, we have used them as well. In this Section, in addition to a description of each dataset, we provide an overview of the emotions used, their distribution, and how we mapped them to those we obtained from Facebook posts in Section SECREF7 . A summary is provided in Table TABREF8 , which also shows, in the bottom row, what role each dataset has in our experiments: apart from the development portion of the Affective Text, which we used to develop our models (Section SECREF4 ), all three have been used as benchmarks for our evaluation."
],
[
"Task 14 at SemEval 2007 BIBREF7 was concerned with the classification of emotions and valence in news headlines. The headlines where collected from several news websites including Google news, The New York Times, BBC News and CNN. The used emotion labels were Anger, Disgust, Fear, Joy, Sadness, Surprise, in line with the six basic emotions of Ekman's standard model BIBREF8 . Valence was to be determined as positive or negative. Classification of emotion and valence were treated as separate tasks. Emotion labels were not considered as mututally exclusive, and each emotion was assigned a score from 0 to 100. Training/developing data amounted to 250 annotated headlines (Affective development), while systems were evaluated on another 1000 (Affective test). Evaluation was done using two different methods: a fine-grained evaluation using Pearson's r to measure the correlation between the system scores and the gold standard; and a coarse-grained method where each emotion score was converted to a binary label, and precision, recall, and f-score were computed to assess performance. As it is done in most works that use this dataset BIBREF3 , BIBREF4 , BIBREF9 , we also treat this as a classification problem (coarse-grained). This dataset has been extensively used for the evaluation of various unsupervised methods BIBREF2 , but also for testing different supervised learning techniques and feature portability BIBREF10 ."
],
[
"This is a dataset collected by alm2008affect, where about 1,000 sentences from fairy tales (by B. Potter, H.C. Andersen and Grimm) were annotated with the same six emotions of the Affective Text dataset, though with different names: Angry, Disgusted, Fearful, Happy, Sad, and Surprised. In most works that use this dataset BIBREF3 , BIBREF4 , BIBREF9 , only sentences where all annotators agreed are used, and the labels angry and disgusted are merged. We adopt the same choices."
],
[
"The ISEAR (International Survey on Emotion Antecedents and Reactions BIBREF11 , BIBREF12 ) is a dataset created in the context of a psychology project of the 1990s, by collecting questionnaires answered by people with different cultural backgrounds. The main aim of this project was to gather insights in cross-cultural aspects of emotional reactions. Student respondents, both psychologists and non-psychologists, were asked to report situations in which they had experienced all of seven major emotions (joy, fear, anger, sadness, disgust, shame and guilt). In each case, the questions covered the way they had appraised a given situation and how they reacted. The final dataset contains reports by approximately 3000 respondents from all over the world, for a total of 7665 sentences labelled with an emotion, making this the largest dataset out of the three we use."
],
[
"We summarise datasets and emotion distribution from two viewpoints. First, because there are different sets of emotions labels in the datasets and Facebook data, we need to provide a mapping and derive a subset of emotions that we are going to use for the experiments. This is shown in Table TABREF8 , where in the “Mapped” column we report the final emotions we use in this paper: anger, joy, sadness, surprise. All labels in each dataset are mapped to these final emotions, which are therefore the labels we use for training and testing our models.",
"Second, the distribution of the emotions for each dataset is different, as can be seen in Figure FIGREF9 .",
"In Figure FIGREF9 we also provide the distribution of the emotions anger, joy, sadness, surprise per Facebook page, in terms of number of posts (recall that we assign to a post the label corresponding to the majority emotion associated to it, see Section SECREF2 ). We can observe that for example pages about news tend to have more sadness and anger posts, while pages about cooking and tv-shows have a high percentage of joy posts. We will use this information to find the best set of pages for a given target domain (see Section SECREF5 ).",
""
],
[
"There are two main decisions to be taken in developing our model: (i) which Facebook pages to select as training data, and (ii) which features to use to train the model, which we discuss below. Specifically, we first set on a subset of pages and then experiment with features. Further exploration of the interaction between choice of pages and choice of features is left to future work, and partly discussed in Section SECREF6 . For development, we use a small portion of the Affective data set described in Section SECREF4 , that is the portion that had been released as development set for SemEval's 2007 Task 14 BIBREF7 , which contains 250 annotated sentences (Affective development, Section SECREF4 ). All results reported in this section are on this dataset. The test set of Task 14 as well as the other two datasets described in Section SECREF3 will be used to evaluate the final models (Section SECREF4 )."
],
[
"Although page selection is a crucial ingredient of this approach, which we believe calls for further and deeper, dedicated investigation, for the experiments described here we took a rather simple approach. First, we selected the pages that would provide training data based on intuition and availability, then chose different combinations according to results of a basic model run on development data, and eventually tested feature combinations, still on the development set.",
"For the sake of simplicity and transparency, we first trained an SVM with a simple bag-of-words model and default parameters as per the Scikit-learn implementation BIBREF13 on different combinations of pages. Based on results of the attempted combinations as well as on the distribution of emotions in the development dataset (Figure FIGREF9 ), we selected a best model (B-M), namely the combined set of Time, The Guardian and Disney, which yields the highest results on development data. Time and The Guardian perform well on most emotions but Disney helps to boost the performance for the Joy class."
],
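The page-selection experiments rely on an SVM over a plain bag of words with scikit-learn defaults; a hedged reconstruction of that baseline is shown below (the exact vectorizer settings are not specified in the text, so defaults are assumed).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_bow_svm(posts, labels):
    """posts: raw Facebook post texts; labels: anger / joy / sadness / surprise."""
    model = make_pipeline(CountVectorizer(), SVC())   # default parameters, as stated in the text
    model.fit(posts, labels)
    return model

# model = train_bow_svm(train_posts, train_labels)
# predictions = model.predict(affective_dev_headlines)
```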
[
"In selecting appropriate features, we mainly relied on previous work and intuition. We experimented with different combinations, and all tests were still done on Affective development, using the pages for the best model (B-M) described above as training data. Results are in Table TABREF20 . Future work will further explore the simultaneous selection of features and page combinations.",
"We use a set of basic text-based features to capture the emotion class. These include a tf-idf bag-of-words feature, word (2-3) and character (2-5) ngrams, and features related to the presence of negation words, and to the usage of punctuation.",
"This feature is used in all unsupervised models as a source of information, and we mainly include it to assess its contribution, but eventually do not use it in our final model.",
"We used the NRC10 Lexicon because it performed best in the experiments by BIBREF10 , which is built around the emotions anger, anticipation, disgust, fear, joy, sadness, and surprise, and the valence values positive and negative. For each word in the lexicon, a boolean value indicating presence or absence is associated to each emotion. For a whole sentence, a global score per emotion can be obtained by summing the vectors for all content words of that sentence included in the lexicon, and used as feature.",
"As additional feature, we also included Word Embeddings, namely distributed representations of words in a vector space, which have been exceptionally successful in boosting performance in a plethora of NLP tasks. We use three different embeddings:",
"Google embeddings: pre-trained embeddings trained on Google News and obtained with the skip-gram architecture described in BIBREF14 . This model contains 300-dimensional vectors for 3 million words and phrases.",
"Facebook embeddings: embeddings that we trained on our scraped Facebook pages for a total of 20,000 sentences. Using the gensim library BIBREF15 , we trained the embeddings with the following parameters: window size of 5, learning rate of 0.01 and dimensionality of 100. We filtered out words with frequency lower than 2 occurrences.",
"Retrofitted embeddings: Retrofitting BIBREF16 has been shown as a simple but efficient way of informing trained embeddings with additional information derived from some lexical resource, rather than including it directly at the training stage, as it's done for example to create sense-aware BIBREF17 or sentiment-aware BIBREF18 embeddings. In this work, we retrofit general embeddings to include information about emotions, so that emotion-similar words can get closer in space. Both the Google as well as our Facebook embeddings were retrofitted with lexical information obtained from the NRC10 Lexicon mentioned above, which provides emotion-similarity for each token. Note that differently from the previous two types of embeddings, the retrofitted ones do rely on handcrafted information in the form of a lexical resource."
],
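Two of the features above translate directly into code: training the Facebook embeddings with gensim using the reported hyper-parameters, and summing per-word emotion indicators from a lexicon into a sentence-level vector. Argument names follow current gensim (vector_size was called size in older releases), and the lexicon's dictionary layout is an assumption.

```python
import numpy as np
from gensim.models import Word2Vec

def train_facebook_embeddings(tokenized_posts):
    # window 5, learning rate 0.01, dimensionality 100, minimum frequency 2, as reported above
    return Word2Vec(tokenized_posts, vector_size=100, window=5, alpha=0.01, min_count=2)

def lexicon_feature(tokens, lexicon, emotions=("anger", "joy", "sadness", "surprise")):
    """lexicon: word -> {emotion: 0/1}; returns the summed per-emotion counts for one sentence."""
    vec = np.zeros(len(emotions))
    for w in tokens:
        if w in lexicon:
            vec += np.array([lexicon[w].get(e, 0) for e in emotions])
    return vec
```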
[
"We report precision, recall, and f-score on the development set. The average f-score is reported as micro-average, to better account for the skewed distribution of the classes as well as in accordance to what is usually reported for this task BIBREF19 .",
"From Table TABREF20 we draw three main observations. First, a simple tf-idf bag-of-word mode works already very well, to the point that the other textual and lexicon-based features don't seem to contribute to the overall f-score (0.368), although there is a rather substantial variation of scores per class. Second, Google embeddings perform a lot better than Facebook embeddings, and this is likely due to the size of the corpus used for training. Retrofitting doesn't seem to help at all for the Google embeddings, but it does boost the Facebook embeddings, leading to think that with little data, more accurate task-related information is helping, but corpus size matters most. Third, in combination with embeddings, all features work better than just using tf-idf, but removing the Lexicon feature, which is the only one based on hand-crafted resources, yields even better results. Then our best model (B-M) on development data relies entirely on automatically obtained information, both in terms of training data as well as features."
],
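The micro-averaged precision, recall and f-score reported above can be obtained with scikit-learn in one call; a minimal example.

```python
from sklearn.metrics import precision_recall_fscore_support

def micro_scores(y_true, y_pred):
    p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average="micro")
    return {"precision": p, "recall": r, "f1": f}
```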
[
"In Table TABREF26 we report the results of our model on the three datasets standardly used for the evaluation of emotion classification, which we have described in Section SECREF3 .",
"Our B-M model relies on subsets of Facebook pages for training, which were chosen according to their performance on the development set as well as on the observation of emotions distribution on different pages and in the different datasets, as described in Section SECREF4 . The feature set we use is our best on the development set, namely all the features plus Google-based embeddings, but excluding the lexicon. This makes our approach completely independent of any manual annotation or handcrafted resource. Our model's performance is compared to the following systems, for which results are reported in the referred literature. Please note that no other existing model was re-implemented, and results are those reported in the respective papers."
],
[
"We have explored the potential of using Facebook reactions in a distant supervised setting to perform emotion classification. The evaluation on standard benchmarks shows that models trained as such, especially when enhanced with continuous vector representations, can achieve competitive results without relying on any handcrafted resource. An interesting aspect of our approach is the view to domain adaptation via the selection of Facebook pages to be used as training data.",
"We believe that this approach has a lot of potential, and we see the following directions for improvement. Feature-wise, we want to train emotion-aware embeddings, in the vein of work by tang:14, and iacobacci2015sensembed. Retrofitting FB-embeddings trained on a larger corpus might also be successful, but would rely on an external lexicon.",
"The largest room for yielding not only better results but also interesting insights on extensions of this approach lies in the choice of training instances, both in terms of Facebook pages to get posts from, as well as in which posts to select from the given pages. For the latter, one could for example only select posts that have a certain length, ignore posts that are only quotes or captions to images, or expand posts by including content from linked html pages, which might provide larger and better contexts BIBREF23 . Additionally, and most importantly, one could use an entropy-based measure to select only posts that have a strong emotion rather than just considering the majority emotion as training label. For the former, namely the choice of Facebook pages, which we believe deserves the most investigation, one could explore several avenues, especially in relation to stance-based issues BIBREF24 . In our dataset, for example, a post about Chile beating Colombia in a football match during the Copa America had very contradictory reactions, depending on which side readers would cheer for. Similarly, the very same political event, for example, would get very different reactions from readers if it was posted on Fox News or The Late Night Show, as the target audience is likely to feel very differently about the same issue. This also brings up theoretical issues related more generally to the definition of the emotion detection task, as it's strongly dependent on personal traits of the audience. Also, in this work, pages initially selected on availability and intuition were further grouped into sets to make training data according to performance on development data, and label distribution. Another criterion to be exploited would be vocabulary overlap between the pages and the datasets.",
"Lastly, we could develop single models for each emotion, treating the problem as a multi-label task. This would even better reflect the ambiguity and subjectivity intrinsic to assigning emotions to text, where content could be at same time joyful or sad, depending on the reader."
],
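The entropy-based post selection mentioned in the discussion above can be sketched as follows. This is a hypothetical illustration, not an implementation from the paper: the function names, the threshold value, and the example posts are all assumptions.

```python
import math

def reaction_entropy(reaction_counts):
    """Normalized Shannon entropy of a post's reaction distribution (0 = one emotion dominates)."""
    total = sum(reaction_counts.values())
    if total == 0:
        return 1.0
    probs = [c / total for c in reaction_counts.values() if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(reaction_counts)) if len(reaction_counts) > 1 else 1.0
    return entropy / max_entropy

def select_training_posts(posts, threshold=0.5):
    """Keep only posts with a clearly dominant emotion and label them with that emotion."""
    selected = []
    for text, counts in posts:
        if reaction_entropy(counts) <= threshold:
            selected.append((text, max(counts, key=counts.get)))
    return selected

# Illustrative posts with made-up reaction counts.
posts = [("what a win!", {"joy": 120, "sadness": 3, "anger": 2}),
         ("mixed feelings about this", {"joy": 40, "sadness": 38, "anger": 35})]
print(select_training_posts(posts))  # only the first post passes the filter
```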
[
"In addition to the anonymous reviewers, we want to thank Lucia Passaro and Barbara Plank for insightful discussions, and for providing comments on draft versions of this paper."
]
],
"section_name": [
"Introduction",
"Facebook reactions as labels",
"Emotion datasets",
"Affective Text dataset",
"Fairy Tales dataset",
"ISEAR",
"Overview of datasets and emotions",
"Model",
"Selecting Facebook pages",
"Features",
"Results on development set",
"Results",
"Discussion, conclusions and future work",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"a5c902e37d0e1cd5150a0c44a2b0f68bb8c2f1ed"
],
"answer": [
{
"evidence": [
"In Table TABREF26 we report the results of our model on the three datasets standardly used for the evaluation of emotion classification, which we have described in Section SECREF3 ."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Table 3) Best author's model B-M average micro f-score is 0.409, 0.459, 0.411 on Affective, Fairy Tales and ISEAR datasets respectively. ",
"highlighted_evidence": [
"In Table TABREF26 we report the results of our model on the three datasets standardly used for the evaluation of emotion classification, which we have described in Section SECREF3 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5dfbb5210ea58b1ef9cbe761ccb3fda267932664",
"74cc2d55aa4c1b9b2553bd40fbf59c4990c928df"
],
"answer": [
{
"evidence": [
"Three datasets annotated with emotions are commonly used for the development and evaluation of emotion detection systems, namely the Affective Text dataset, the Fairy Tales dataset, and the ISEAR dataset. In order to compare our performance to state-of-the-art results, we have used them as well. In this Section, in addition to a description of each dataset, we provide an overview of the emotions used, their distribution, and how we mapped them to those we obtained from Facebook posts in Section SECREF7 . A summary is provided in Table TABREF8 , which also shows, in the bottom row, what role each dataset has in our experiments: apart from the development portion of the Affective Text, which we used to develop our models (Section SECREF4 ), all three have been used as benchmarks for our evaluation."
],
"extractive_spans": [
"Affective Text",
"Fairy Tales",
"ISEAR"
],
"free_form_answer": "",
"highlighted_evidence": [
"Three datasets annotated with emotions are commonly used for the development and evaluation of emotion detection systems, namely the Affective Text dataset, the Fairy Tales dataset, and the ISEAR dataset.",
"A summary is provided in Table TABREF8 , which also shows, in the bottom row, what role each dataset has in our experiments: apart from the development portion of the Affective Text, which we used to develop our models (Section SECREF4 ), all three have been used as benchmarks for our evaluation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Three datasets annotated with emotions are commonly used for the development and evaluation of emotion detection systems, namely the Affective Text dataset, the Fairy Tales dataset, and the ISEAR dataset. In order to compare our performance to state-of-the-art results, we have used them as well. In this Section, in addition to a description of each dataset, we provide an overview of the emotions used, their distribution, and how we mapped them to those we obtained from Facebook posts in Section SECREF7 . A summary is provided in Table TABREF8 , which also shows, in the bottom row, what role each dataset has in our experiments: apart from the development portion of the Affective Text, which we used to develop our models (Section SECREF4 ), all three have been used as benchmarks for our evaluation."
],
"extractive_spans": [
" Affective Text dataset",
"Fairy Tales dataset",
"ISEAR dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"Three datasets annotated with emotions are commonly used for the development and evaluation of emotion detection systems, namely the Affective Text dataset, the Fairy Tales dataset, and the ISEAR dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"7be7cbbad537d942af36140ae2810e5028d26f3f",
"baefc4512d7307b4d906dcece09908dfeda69632"
],
"answer": [
{
"evidence": [
"We collected Facebook posts and their corresponding reactions from public pages using the Facebook API, which we accessed via the Facebook-sdk python library. We chose different pages (and therefore domains and stances), aiming at a balanced and varied dataset, but we did so mainly based on intuition (see Section SECREF4 ) and with an eye to the nature of the datasets available for evaluation (see Section SECREF5 ). The choice of which pages to select posts from is far from trivial, and we believe this is actually an interesting aspect of our approach, as by using different Facebook pages one can intrinsically tackle the domain-adaptation problem (See Section SECREF6 for further discussion on this). The final collection of Facebook pages for the experiments described in this paper is as follows: FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney."
],
"extractive_spans": [
"FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney"
],
"free_form_answer": "",
"highlighted_evidence": [
"The final collection of Facebook pages for the experiments described in this paper is as follows: FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We collected Facebook posts and their corresponding reactions from public pages using the Facebook API, which we accessed via the Facebook-sdk python library. We chose different pages (and therefore domains and stances), aiming at a balanced and varied dataset, but we did so mainly based on intuition (see Section SECREF4 ) and with an eye to the nature of the datasets available for evaluation (see Section SECREF5 ). The choice of which pages to select posts from is far from trivial, and we believe this is actually an interesting aspect of our approach, as by using different Facebook pages one can intrinsically tackle the domain-adaptation problem (See Section SECREF6 for further discussion on this). The final collection of Facebook pages for the experiments described in this paper is as follows: FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney."
],
"extractive_spans": [
"FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney."
],
"free_form_answer": "",
"highlighted_evidence": [
"The final collection of Facebook pages for the experiments described in this paper is as follows: FoxNews, CNN, ESPN, New York Times, Time magazine, Huffington Post Weird News, The Guardian, Cartoon Network, Cooking Light, Home Cooking Adventure, Justin Bieber, Nickelodeon, Spongebob, Disney."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"What was their performance on emotion detection?",
"Which existing benchmarks did they compare to?",
"Which Facebook pages did they look at?"
],
"question_id": [
"fa30a938b58fc05131c3854f12efe376cbad887f",
"f875337f2ecd686cd7789e111174d0f14972638d",
"de53af4eddbc30c808d90b8a11a29217d377569e"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: Emotion labels in existing datasets, Facebook, and resulting mapping for the experiments in this work. The last row indicates which role each dataset has in our experiments.",
"Figure 3: Emotion distribution in the datasets Figure 4: Emotion distribution per Facebook page",
"Table 2: Results on the development set (Affective development). avg f is the micro-averaged f-score.",
"Table 3: Results on test datasets according to Precision, Recall and F-score."
],
"file": [
"4-Table1-1.png",
"4-Figure3-1.png",
"6-Table2-1.png",
"8-Table3-1.png"
]
} | [
"What was their performance on emotion detection?"
] | [
[
"1611.02988-Results-0"
]
] | [
"Answer with content missing: (Table 3) Best author's model B-M average micro f-score is 0.409, 0.459, 0.411 on Affective, Fairy Tales and ISEAR datasets respectively. "
] | 203 |
1604.08504 | Detecting"Smart"Spammers On Social Network: A Topic Model Approach | Spammer detection on social network is a challenging problem. The rigid anti-spam rules have resulted in emergence of"smart"spammers. They resemble legitimate users who are difficult to identify. In this paper, we present a novel spammer classification approach based on Latent Dirichlet Allocation(LDA), a topic model. Our approach extracts both the local and the global information of topic distribution patterns, which capture the essence of spamming. Tested on one benchmark dataset and one self-collected dataset, our proposed method outperforms other state-of-the-art methods in terms of averaged F1-score. | {
"paragraphs": [
[
"Microblogging such as Twitter and Weibo is a popular social networking service, which allows users to post messages up to 140 characters. There are millions of active users on the platform who stay connected with friends. Unfortunately, spammers also use it as a tool to post malicious links, send unsolicited messages to legitimate users, etc. A certain amount of spammers could sway the public opinion and cause distrust of the social platform. Despite the use of rigid anti-spam rules, human-like spammers whose homepages having photos, detailed profiles etc. have emerged. Unlike previous \"simple\" spammers, whose tweets contain only malicious links, those \"smart\" spammers are more difficult to distinguish from legitimate users via content-based features alone BIBREF0 .",
"There is a considerable amount of previous work on spammer detection on social platforms. Researcher from Twitter Inc. BIBREF1 collect bot accounts and perform analysis on the user behavior and user profile features. Lee et al. lee2011seven use the so-called social honeypot by alluring social spammers' retweet to build a benchmark dataset, which has been extensively explored in our paper. Some researchers focus on the clustering of urls in tweets and network graph of social spammers BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , showing the power of social relationship features.As for content information modeling, BIBREF6 apply improved sparse learning methods. However, few studies have adopted topic-based features. Some researchers BIBREF7 discuss topic characteristics of spamming posts, indicating that spammers are highly likely to dwell on some certain topics such as promotion. But this may not be applicable to the current scenario of smart spammers.",
"In this paper, we propose an efficient feature extraction method. In this method, two new topic-based features are extracted and used to discriminate human-like spammers from legitimate users. We consider the historical tweets of each user as a document and use the Latent Dirichlet Allocation (LDA) model to compute the topic distribution for each user. Based on the calculated topic probability, two topic-based features, the Local Outlier Standard Score (LOSS) which captures the user's interests on different topics and the Global Outlier Standard Score (GOSS) which reveals the user's interests on specific topic in comparison with other users', are extracted. The two features contain both local and global information, and the combination of them can distinguish human-like spammers effectively.",
"To the best of our knowledge, it is the first time that features based on topic distributions are used in spammer classification. Experimental results on one public dataset and one self-collected dataset further validate that the two sets of extracted topic-based features get excellent performance on human-like spammer classification problem compared with other state-of-the-art methods. In addition, we build a Weibo dataset, which contains both legitimate users and spammers.",
"To summarize, our major contributions are two-fold:",
"In the following sections, we first propose the topic-based features extraction method in Section 2, and then introduce the two datasets in Section 3. Experimental results are discussed in Section 4, and we conclude the paper in Section 5. Future work is presented in Section 6."
],
[
"In this section, we first provide some observations we obtained after carefully exploring the social network, then the LDA model is introduced. Based on the LDA model, the ways to obtain the topic probability vector for each user and the two topic-based features are provided."
],
[
"After exploring the homepages of a substantial number of spammers, we have two observations. 1) social spammers can be divided into two categories. One is content polluters, and their tweets are all about certain kinds of advertisement and campaign. The other is fake accounts, and their tweets resemble legitimate users' but it seems they are simply random copies of others to avoid being detected by anti-spam rules. 2) For legitimate users, content polluters and fake accounts, they show different patterns on topics which interest them.",
"Legitimate users mainly focus on limited topics which interest him. They seldom post contents unrelated to their concern.",
"Content polluters concentrate on certain topics.",
"Fake accounts focus on a wide range of topics due to random copying and retweeting of other users' tweets.",
"Spammers and legitimate users show different interests on some topics e.g. commercial, weather, etc.",
"To better illustrate our observation, Figure. 1 shows the topic distribution of spammers and legitimate users in two employed datasets(the Honeypot dataset and Weibo dataset). We can see that on both topics (topic-3 and topic-11) there exists obvious difference between the red bars and green bars, representing spammers and legitimate users. On the Honeypot dataset, spammers have a narrower shape of distribution (the outliers on the red bar tail are not counted) than that of legitimate users. This is because there are more content polluters than fake accounts. In other word, spammers in this dataset tend to concentrate on limited topics. While on the Weibo dataset, fake accounts who are interested in different topics take large proportion of spammers. Their distribution is more flat (i.e. red bars) than that of the legitimate users. Therefore we can detect spammers by means of the difference of their topic distribution patterns."
],
[
"Blei et al.blei2003latent first presented Latent Dirichlet Allocation(LDA) as an example of topic model.",
"Each document $i$ is deemed as a bag of words $W=\\left\\lbrace w_{i1},w_{i2},...,w_{iM}\\right\\rbrace $ and $M$ is the number of words. Each word is attributable to one of the document's topics $Z=\\left\\lbrace z_{i1},z_{i2},...,z_{iK}\\right\\rbrace $ and $K$ is the number of topics. $\\psi _{k}$ is a multinomial distribution over words for topic $k$ . $\\theta _i$ is another multinomial distribution over topics for document $i$ . The smoothed generative model is illustrated in Figure. 2 . $\\alpha $ and $W=\\left\\lbrace w_{i1},w_{i2},...,w_{iM}\\right\\rbrace $0 are hyper parameter that affect scarcity of the document-topic and topic-word distributions. In this paper, $W=\\left\\lbrace w_{i1},w_{i2},...,w_{iM}\\right\\rbrace $1 , $W=\\left\\lbrace w_{i1},w_{i2},...,w_{iM}\\right\\rbrace $2 and $W=\\left\\lbrace w_{i1},w_{i2},...,w_{iM}\\right\\rbrace $3 are empirically set to 0.3, 0.01 and 15. The entire content of each Twitter user is regarded as one document. We adopt Gibbs Sampling BIBREF8 to speed up the inference of LDA. Based on LDA, we can get the topic probabilities for all users in the employed dataset as: $W=\\left\\lbrace w_{i1},w_{i2},...,w_{iM}\\right\\rbrace $4 , where $W=\\left\\lbrace w_{i1},w_{i2},...,w_{iM}\\right\\rbrace $5 is the number of users. Each element $W=\\left\\lbrace w_{i1},w_{i2},...,w_{iM}\\right\\rbrace $6 is a topic probability vector for the $W=\\left\\lbrace w_{i1},w_{i2},...,w_{iM}\\right\\rbrace $7 document. $W=\\left\\lbrace w_{i1},w_{i2},...,w_{iM}\\right\\rbrace $8 is the raw topic probability vector and our features are developed on top of it."
],
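The following is a minimal sketch of how the per-user topic probability vectors described above could be derived. It uses gensim's variational LDA implementation in place of the Gibbs sampler adopted in the paper (a simplification); the hyperparameter values follow the text (alpha = 0.3, eta = 0.01, K = 15), while the function name, the pass count, and the random seed are assumptions.

```python
from gensim import corpora, models
import numpy as np

def user_topic_vectors(user_docs, num_topics=15):
    """user_docs: list of token lists, one per user (all of a user's posts concatenated)."""
    dictionary = corpora.Dictionary(user_docs)
    corpus = [dictionary.doc2bow(doc) for doc in user_docs]
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=num_topics,
                          alpha=0.3, eta=0.01, passes=10, random_state=0)
    X = np.zeros((len(corpus), num_topics))
    for i, bow in enumerate(corpus):
        for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
            X[i, topic_id] = prob
    return X  # X[i, k] = probability that user i favors topic k
```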
[
"Using the LDA model, each person in the dataset is with a topic probability vector $X_i$ . Assume $x_{ik}\\in X_{i}$ denotes the likelihood that the $\\emph {i}^{th}$ tweet account favors $\\emph {k}^{th}$ topic in the dataset. Our topic based features can be calculated as below.",
"Global Outlier Standard Score measures the degree that a user's tweet content is related to a certain topic compared to the other users. Specifically, the \"GOSS\" score of user $i$ on topic $k$ can be calculated as Eq.( 12 ): ",
"$$\\centering \\begin{array}{ll}\n\\mu \\left(x_{k}\\right)=\\frac{\\sum _{i=1}^{n} x_{ik}}{n},\\\\\nGOSS\\left(x_{ik}\\right)=\\frac{x_{ik}-\\mu \\left(x_k\\right)}{\\sqrt{\\underset{i}{\\sum }\\left(x_{ik}-\\mu \\left(x_{k}\\right)\\right)^{2}}}.\n\\end{array}$$ (Eq. 12) ",
"The value of $GOSS\\left(x_{ik}\\right)$ indicates the interesting degree of this person to the $\\emph {k}^{th}$ topic. Specifically, if $GOSS\\left(x_{ik}\\right)$ > $GOSS\\left(x_{jk}\\right)$ , it means that the $\\emph {i}^{th}$ person has more interest in topic $k$ than the $\\emph {j}^{th}$ person. If the value $GOSS\\left(x_{ik}\\right)$ is extremely high or low, the $\\emph {i}^{th}$ person showing extreme interest or no interest on topic $k$ which will probably be a distinctive pattern in the fowllowing classfication. Therefore, the topics interested or disliked by the $\\emph {k}^{th}$0 person can be manifested by $\\emph {k}^{th}$1 , from which the pattern of the interested topics with regarding to this person is found. Denote $\\emph {k}^{th}$2 our first topic-based feature, and it hopefully can get good performance on spammer detection.",
"Local Outlier Standard Score measures the degree of interest someone shows to a certain topic by considering his own homepage content only. For instance, the \"LOSS\" score of account $i$ on topic $k$ can be calculated as Eq.( 13 ): ",
"$$\\centering \\begin{array}{ll}\n\\mu \\left(x_{i}\\right)=\\frac{\\sum _{k=1}^{K} x_{ik}}{K},\\\\\nLOSS\\left(x_{ik}\\right)=\\frac{x_{ik}-\\mu \\left(x_i\\right)}{\\sqrt{\\underset{k}{\\sum }\\left(x_{ik}-\\mu \\left(x_{i}\\right)\\right)^{2}}}.\n\\end{array}$$ (Eq. 13) ",
" $\\mu (x_i)$ represents the averaged interesting degree for all topics with regarding to $\\emph {i}^{th}$ user and his tweet content. Similarly to $GOSS$ , the topics interested or disliked by the $\\emph {i}^{th}$ person via considering his single post information can be manifested by $f_{LOSS}^{i}=[LOSS(x_{i1})\\cdots LOSS(x_{iK})]$ , and $LOSS$ becomes our second topic-based features for the $\\emph {i}^{th}$ person."
],
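The GOSS (Eq. 12) and LOSS (Eq. 13) features translate directly into a few lines of numpy, given the user-topic probability matrix X of shape (n_users, K). Function names are illustrative; the computation itself follows the equations above.

```python
import numpy as np

def goss_features(X):
    """Column-wise standardization: a user's interest in topic k relative to all users (Eq. 12)."""
    centered = X - X.mean(axis=0, keepdims=True)
    norm = np.sqrt((centered ** 2).sum(axis=0, keepdims=True))
    return centered / norm

def loss_features(X):
    """Row-wise standardization: a user's interest in topic k relative to their own topics (Eq. 13)."""
    centered = X - X.mean(axis=1, keepdims=True)
    norm = np.sqrt((centered ** 2).sum(axis=1, keepdims=True))
    return centered / norm

def combined_features(X):
    # Concatenate the two K-dimensional feature vectors per user, as in the GOSS+LOSS setting.
    return np.hstack([goss_features(X), loss_features(X)])
```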
[
"We use one public dataset Social Honeypot dataset and one self-collected dataset Weibo dataset to validate the effectiveness of our proposed features.",
"Social Honeypot Dataset: Lee et al. lee2010devils created and deployed 60 seed social accounts on Twitter to attract spammers by reporting back what accounts interact with them. They collected 19,276 legitimate users and 22,223 spammers in their datasets along with their tweet content in 7 months. This is our first test dataset.",
"Our Weibo Dataset: Sina Weibo is one of the most famous social platforms in China. It has implemented many features from Twitter. The 2197 legitimate user accounts in this dataset are provided by the Tianchi Competition held by Sina Weibo. The spammers are all purchased commercially from multiple vendors on the Internet. We checked them manually and collected 802 suitable \"smart\" spammers accounts.",
"Preprocessing: Before directly performing the experiments on the employed datasets, we first delete some accounts with few posts in the two employed since the number of tweets is highly indicative of spammers. For the English Honeypot dataset, we remove stopwords, punctuations, non-ASCII words and apply stemming. For the Chinese Weibo dataset, we perform segmentation with \"Jieba\", a Chinese text segmentation tool. After preprocessing steps, the Weibo dataset contains 2197 legitimate users and 802 spammers, and the honeypot dataset contains 2218 legitimate users and 2947 spammers. It is worth mentioning that the Honeypot dataset has been slashed because most of the Twitter accounts only have limited number of posts, which are not enough to show their interest inclination."
],
[
"The evaluating indicators in our model are show in 2 . We calculate precision, recall and F1-score (i.e. F1 score) as in Eq. ( 19 ). Precision is the ratio of selected accounts that are spammers. Recall is the ratio of spammers that are detected so. F1-score is the harmonic mean of precision and recall. ",
"$$precision =\\frac{TP}{TP+FP},\nrecall =\\frac{TP}{TP+FN}\\nonumber \\\\\nF1-score = \\frac{2\\times precision \\times recall}{precision + recall}$$ (Eq. 19) "
],
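For completeness, a direct transcription of Eq. (19) as a small helper; the counts come from a confusion matrix as in Table 2, and the example numbers below are illustrative only.

```python
def precision_recall_f1(tp, fp, fn):
    """Compute the metrics of Eq. (19) from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

print(precision_recall_f1(tp=700, fp=100, fn=102))  # illustrative counts, not from the paper
```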
[
"Three baseline classification methods: Support Vector Machines (SVM), Adaboost, and Random Forests are adopted to evaluate our extracted features. We test each classification algorithm with scikit-learn BIBREF9 and run a 10-fold cross validation. On each dataset, the employed classifiers are trained with individual feature first, and then with the combination of the two features. From 1 , we can see that GOSS+LOSS achieves the best performance on F1-score among all others. Besides, the classification by combination of LOSS and GOSS can increase accuracy by more than 3% compared with raw topic distribution probability."
],
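A hedged sketch of the baseline comparison described above: 10-fold cross-validation of SVM, Adaboost and Random Forests on the extracted features with scikit-learn. The paper does not report classifier hyperparameters, so scikit-learn defaults are used here, and the function name is an assumption.

```python
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

def evaluate_features(features, labels):
    """features: array (n_users, d), e.g. combined_features(X); labels: 0/1 spammer labels."""
    classifiers = {
        "SVM": SVC(),
        "Adaboost": AdaBoostClassifier(),
        "RandomForest": RandomForestClassifier(),
    }
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, features, labels, cv=10, scoring="f1")
        print(f"{name}: F1 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```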
[
"To compare our extracted features with previously used features for spammer detection, we use three most discriminative feature sets according to Lee et al. lee2011seven( 4 ). Two classifiers (Adaboost and SVM) are selected to conduct feature performance comparisons. Using Adaboost, our LOSS+GOSS features outperform all other features except for UFN which is 2% higher than ours with regard to precision on the Honeypot dataset. It is caused by the incorrectly classified spammers who are mostly news source after our manual check. They keep posting all kinds of news pieces covering diverse topics, which is similar to the behavior of fake accounts. However, UFN based on friendship networks is more useful for public accounts who possess large number of followers. The best recall value of our LOSS+GOSS features using SVM is up to 6% higher than the results by other feature groups. Regarding F1-score, our features outperform all other features. To further show the advantages of our proposed features, we compare our combined LOSS+GOSS with the combination of all the features from Lee et al. lee2011seven (UFN+UC+UH). It's obvious that LOSS+GOSS have a great advantage over UFN+UC+UH in terms of recall and F1-score. Moreover, by combining our LOSS+GOSS features and UFN+UC+UH features together, we obtained another 7.1% and 2.3% performance gain with regard to precision and F1-score by Adaboost. Though there is a slight decline in terms of recall. By SVM, we get comparative results on recall and F1-score but about 3.5% improvement on precision."
],
[
"In this paper, we propose a novel feature extraction method to effectively detect \"smart\" spammers who post seemingly legitimate tweets and are thus difficult to identify by existing spammer classification methods. Using the LDA model, we obtain the topic probability for each Twitter user. By utilizing the topic probability result, we extract our two topic-based features: GOSS and LOSS which represent the account with global and local information. Experimental results on a public dataset and a self-built Chinese microblog dataset validate the effectiveness of the proposed features."
],
[
"In future work, the combination method of local and global information can be further improved to maximize their individual strengths. We will also apply decision theory to enhancing the performance of our proposed features. Moreover, we are also building larger datasets on both Twitter and Weibo to validate our method. Moreover, larger datasets on both Twitter and Weibo will be built to further validate our method."
]
],
"section_name": [
"Introduction",
"Methodology",
"Observation",
"LDA model",
"Topic-based Features",
"Dataset",
"Evaluation Metrics",
"Performance Comparisons with Baseline",
"Comparison with Other Features",
"Conclusion",
"Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"6e2b7e26d39d894900c57bb95a143e122282fb17",
"ce70b0cd6afffe44d67d97f58a63df57d4d87b85"
],
"answer": [
{
"evidence": [
"Three baseline classification methods: Support Vector Machines (SVM), Adaboost, and Random Forests are adopted to evaluate our extracted features. We test each classification algorithm with scikit-learn BIBREF9 and run a 10-fold cross validation. On each dataset, the employed classifiers are trained with individual feature first, and then with the combination of the two features. From 1 , we can see that GOSS+LOSS achieves the best performance on F1-score among all others. Besides, the classification by combination of LOSS and GOSS can increase accuracy by more than 3% compared with raw topic distribution probability."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Three baseline classification methods: Support Vector Machines (SVM), Adaboost, and Random Forests are adopted to evaluate our extracted features.",
"On each dataset, the employed classifiers are trained with individual feature first, and then with the combination of the two features."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"Three baseline classification methods: Support Vector Machines (SVM), Adaboost, and Random Forests are adopted to evaluate our extracted features. We test each classification algorithm with scikit-learn BIBREF9 and run a 10-fold cross validation. On each dataset, the employed classifiers are trained with individual feature first, and then with the combination of the two features. From 1 , we can see that GOSS+LOSS achieves the best performance on F1-score among all others. Besides, the classification by combination of LOSS and GOSS can increase accuracy by more than 3% compared with raw topic distribution probability."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Three baseline classification methods: Support Vector Machines (SVM), Adaboost, and Random Forests are adopted to evaluate our extracted features. "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"ca2a4695129d0180768a955fb5910d639f79aa34",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"a62db50ac378e2b1e120f85a4d1de913cf96f8a0",
"d04ec21f3ee903ed05b3f9f5756efa1c26f9a3c7"
],
"answer": [
{
"evidence": [
"We use one public dataset Social Honeypot dataset and one self-collected dataset Weibo dataset to validate the effectiveness of our proposed features.",
"Preprocessing: Before directly performing the experiments on the employed datasets, we first delete some accounts with few posts in the two employed since the number of tweets is highly indicative of spammers. For the English Honeypot dataset, we remove stopwords, punctuations, non-ASCII words and apply stemming. For the Chinese Weibo dataset, we perform segmentation with \"Jieba\", a Chinese text segmentation tool. After preprocessing steps, the Weibo dataset contains 2197 legitimate users and 802 spammers, and the honeypot dataset contains 2218 legitimate users and 2947 spammers. It is worth mentioning that the Honeypot dataset has been slashed because most of the Twitter accounts only have limited number of posts, which are not enough to show their interest inclination."
],
"extractive_spans": [],
"free_form_answer": "Social Honeypot dataset (public) and Weibo dataset (self-collected); yes",
"highlighted_evidence": [
"We use one public dataset Social Honeypot dataset and one self-collected dataset Weibo dataset to validate the effectiveness of our proposed features.",
"Before directly performing the experiments on the employed datasets, we first delete some accounts with few posts in the two employed since the number of tweets is highly indicative of spammers. For the English Honeypot dataset, we remove stopwords, punctuations, non-ASCII words and apply stemming. For the Chinese Weibo dataset, we perform segmentation with \"Jieba\", a Chinese text segmentation tool. After preprocessing steps, the Weibo dataset contains 2197 legitimate users and 802 spammers, and the honeypot dataset contains 2218 legitimate users and 2947 spammers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use one public dataset Social Honeypot dataset and one self-collected dataset Weibo dataset to validate the effectiveness of our proposed features.",
"Preprocessing: Before directly performing the experiments on the employed datasets, we first delete some accounts with few posts in the two employed since the number of tweets is highly indicative of spammers. For the English Honeypot dataset, we remove stopwords, punctuations, non-ASCII words and apply stemming. For the Chinese Weibo dataset, we perform segmentation with \"Jieba\", a Chinese text segmentation tool. After preprocessing steps, the Weibo dataset contains 2197 legitimate users and 802 spammers, and the honeypot dataset contains 2218 legitimate users and 2947 spammers. It is worth mentioning that the Honeypot dataset has been slashed because most of the Twitter accounts only have limited number of posts, which are not enough to show their interest inclination.",
"To compare our extracted features with previously used features for spammer detection, we use three most discriminative feature sets according to Lee et al. lee2011seven( 4 ). Two classifiers (Adaboost and SVM) are selected to conduct feature performance comparisons. Using Adaboost, our LOSS+GOSS features outperform all other features except for UFN which is 2% higher than ours with regard to precision on the Honeypot dataset. It is caused by the incorrectly classified spammers who are mostly news source after our manual check. They keep posting all kinds of news pieces covering diverse topics, which is similar to the behavior of fake accounts. However, UFN based on friendship networks is more useful for public accounts who possess large number of followers. The best recall value of our LOSS+GOSS features using SVM is up to 6% higher than the results by other feature groups. Regarding F1-score, our features outperform all other features. To further show the advantages of our proposed features, we compare our combined LOSS+GOSS with the combination of all the features from Lee et al. lee2011seven (UFN+UC+UH). It's obvious that LOSS+GOSS have a great advantage over UFN+UC+UH in terms of recall and F1-score. Moreover, by combining our LOSS+GOSS features and UFN+UC+UH features together, we obtained another 7.1% and 2.3% performance gain with regard to precision and F1-score by Adaboost. Though there is a slight decline in terms of recall. By SVM, we get comparative results on recall and F1-score but about 3.5% improvement on precision."
],
"extractive_spans": [],
"free_form_answer": "Social Honeypot, which is not of high quality",
"highlighted_evidence": [
"We use one public dataset Social Honeypot dataset and one self-collected dataset Weibo dataset to validate the effectiveness of our proposed features.",
"It is worth mentioning that the Honeypot dataset has been slashed because most of the Twitter accounts only have limited number of posts, which are not enough to show their interest inclination.",
"Using Adaboost, our LOSS+GOSS features outperform all other features except for UFN which is 2% higher than ours with regard to precision on the Honeypot dataset. It is caused by the incorrectly classified spammers who are mostly news source after our manual check."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"ca2a4695129d0180768a955fb5910d639f79aa34"
]
},
{
"annotation_id": [
"b972ded659f23ac147eb2407cd7a74f6bd671397"
],
"answer": [
{
"evidence": [
"Using the LDA model, each person in the dataset is with a topic probability vector $X_i$ . Assume $x_{ik}\\in X_{i}$ denotes the likelihood that the $\\emph {i}^{th}$ tweet account favors $\\emph {k}^{th}$ topic in the dataset. Our topic based features can be calculated as below.",
"Global Outlier Standard Score measures the degree that a user's tweet content is related to a certain topic compared to the other users. Specifically, the \"GOSS\" score of user $i$ on topic $k$ can be calculated as Eq.( 12 ):",
"Local Outlier Standard Score measures the degree of interest someone shows to a certain topic by considering his own homepage content only. For instance, the \"LOSS\" score of account $i$ on topic $k$ can be calculated as Eq.( 13 ):",
"Three baseline classification methods: Support Vector Machines (SVM), Adaboost, and Random Forests are adopted to evaluate our extracted features. We test each classification algorithm with scikit-learn BIBREF9 and run a 10-fold cross validation. On each dataset, the employed classifiers are trained with individual feature first, and then with the combination of the two features. From 1 , we can see that GOSS+LOSS achieves the best performance on F1-score among all others. Besides, the classification by combination of LOSS and GOSS can increase accuracy by more than 3% compared with raw topic distribution probability."
],
"extractive_spans": [],
"free_form_answer": "Extract features from the LDA model and use them in a binary classification task",
"highlighted_evidence": [
"Using the LDA model, each person in the dataset is with a topic probability vector $X_i$ . Assume $x_{ik}\\in X_{i}$ denotes the likelihood that the $\\emph {i}^{th}$ tweet account favors $\\emph {k}^{th}$ topic in the dataset. Our topic based features can be calculated as below.",
"Global Outlier Standard Score measures the degree that a user's tweet content is related to a certain topic compared to the other users.",
"Local Outlier Standard Score measures the degree of interest someone shows to a certain topic by considering his own homepage content only.",
"Three baseline classification methods: Support Vector Machines (SVM), Adaboost, and Random Forests are adopted to evaluate our extracted features."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"LDA is an unsupervised method; is this paper introducing an unsupervised approach to spam detection?",
"What is the benchmark dataset and is its quality high?",
"How do they detect spammers?"
],
"question_id": [
"dac087e1328e65ca08f66d8b5307d6624bf3943f",
"a1645d0ba50e4c29f0feb806521093e7b1459081",
"3cd185b7adc835e1c4449eff81222f5fc15c8500"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"social",
"social",
"social"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: The topic distribution of legitimate users and social spammers on Honeypot dataset (left) and on Weibo dataset (right), respectively.",
"Figure 2: The generative model of LDA",
"Table 1: Performance comparisons for our features with three baseline classifiers",
"Table 4: Honeypot Feature Groups",
"Table 2: Confusion matrix",
"Table 3: Comparisons of our features and Lee et al.’s features"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"4-Table4-1.png",
"4-Table2-1.png",
"5-Table3-1.png"
]
} | [
"What is the benchmark dataset and is its quality high?",
"How do they detect spammers?"
] | [
[
"1604.08504-Dataset-3",
"1604.08504-Comparison with Other Features-0",
"1604.08504-Dataset-0"
],
[
"1604.08504-Topic-based Features-0",
"1604.08504-Performance Comparisons with Baseline-0"
]
] | [
"Social Honeypot, which is not of high quality",
"Extract features from the LDA model and use them in a binary classification task"
] | 204 |
1802.08636 | Ranking Sentences for Extractive Summarization with Reinforcement Learning | Single document summarization is the task of producing a shorter version of a document while preserving its principal information content. In this paper we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the ROUGE evaluation metric through a reinforcement learning objective. We use our algorithm to train a neural summarization model on the CNN and DailyMail datasets and demonstrate experimentally that it outperforms state-of-the-art extractive and abstractive systems when evaluated automatically and by humans. | {
"paragraphs": [
[
"Automatic summarization has enjoyed wide popularity in natural language processing due to its potential for various information access applications. Examples include tools which aid users navigate and digest web content (e.g., news, social media, product reviews), question answering, and personalized recommendation engines. Single document summarization — the task of producing a shorter version of a document while preserving its information content — is perhaps the most basic of summarization tasks that have been identified over the years (see BIBREF0 , BIBREF0 for a comprehensive overview).",
"Modern approaches to single document summarization are data-driven, taking advantage of the success of neural network architectures and their ability to learn continuous features without recourse to preprocessing tools or linguistic annotations. Abstractive summarization involves various text rewriting operations (e.g., substitution, deletion, reordering) and has been recently framed as a sequence-to-sequence problem BIBREF1 . Central in most approaches BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 is an encoder-decoder architecture modeled by recurrent neural networks. The encoder reads the source sequence into a list of continuous-space representations from which the decoder generates the target sequence. An attention mechanism BIBREF8 is often used to locate the region of focus during decoding.",
"Extractive systems create a summary by identifying (and subsequently concatenating) the most important sentences in a document. A few recent approaches BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 conceptualize extractive summarization as a sequence labeling task in which each label specifies whether each document sentence should be included in the summary. Existing models rely on recurrent neural networks to derive a meaning representation of the document which is then used to label each sentence, taking the previously labeled sentences into account. These models are typically trained using cross-entropy loss in order to maximize the likelihood of the ground-truth labels and do not necessarily learn to rank sentences based on their importance due to the absence of a ranking-based objective. Another discrepancy comes from the mismatch between the learning objective and the evaluation criterion, namely ROUGE BIBREF13 , which takes the entire summary into account.",
"In this paper we argue that cross-entropy training is not optimal for extractive summarization. Models trained this way are prone to generating verbose summaries with unnecessarily long sentences and redundant information. We propose to overcome these difficulties by globally optimizing the ROUGE evaluation metric and learning to rank sentences for summary generation through a reinforcement learning objective. Similar to previous work BIBREF9 , BIBREF11 , BIBREF10 , our neural summarization model consists of a hierarchical document encoder and a hierarchical sentence extractor. During training, it combines the maximum-likelihood cross-entropy loss with rewards from policy gradient reinforcement learning to directly optimize the evaluation metric relevant for the summarization task. We show that this global optimization framework renders extractive models better at discriminating among sentences for the final summary; a sentence is ranked high for selection if it often occurs in high scoring summaries.",
"We report results on the CNN and DailyMail news highlights datasets BIBREF14 which have been recently used as testbeds for the evaluation of neural summarization systems. Experimental results show that when evaluated automatically (in terms of ROUGE), our model outperforms state-of-the-art extractive and abstractive systems. We also conduct two human evaluations in order to assess (a) which type of summary participants prefer (we compare extractive and abstractive systems) and (b) how much key information from the document is preserved in the summary (we ask participants to answer questions pertaining to the content in the document by reading system summaries). Both evaluations overwhelmingly show that human subjects find our summaries more informative and complete.",
"Our contributions in this work are three-fold: a novel application of reinforcement learning to sentence ranking for extractive summarization; corroborated by analysis and empirical results showing that cross-entropy training is not well-suited to the summarization task; and large scale user studies following two evaluation paradigms which demonstrate that state-of-the-art abstractive systems lag behind extractive ones when the latter are globally trained."
],
[
"Given a document D consisting of a sequence of sentences INLINEFORM0 , an extractive summarizer aims to produce a summary INLINEFORM1 by selecting INLINEFORM2 sentences from D (where INLINEFORM3 ). For each sentence INLINEFORM4 , we predict a label INLINEFORM5 (where 1 means that INLINEFORM6 should be included in the summary) and assign a score INLINEFORM7 quantifying INLINEFORM8 's relevance to the summary. The model learns to assign INLINEFORM9 when sentence INLINEFORM10 is more relevant than INLINEFORM11 . Model parameters are denoted by INLINEFORM12 . We estimate INLINEFORM13 using a neural network model and assemble a summary INLINEFORM14 by selecting INLINEFORM15 sentences with top INLINEFORM16 scores.",
"Our architecture resembles those previously proposed in the literature BIBREF9 , BIBREF10 , BIBREF11 . The main components include a sentence encoder, a document encoder, and a sentence extractor (see the left block of Figure FIGREF2 ) which we describe in more detail below."
],
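Once the sentence extractor has produced a relevance score for each sentence, assembling the extract amounts to taking the top-m sentences. A minimal sketch follows; the sentence texts and scores are illustrative, not from the datasets.

```python
def assemble_summary(sentences, scores, m=3):
    """Pick the m highest-scoring sentences and keep them in document order."""
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:m]
    return [sentences[i] for i in sorted(ranked)]

sents = ["Police arrested a suspect on Friday.", "The weather was mild.",
         "The suspect was charged with fraud.", "Officials praised the investigation."]
print(assemble_summary(sents, [0.9, 0.1, 0.8, 0.4], m=2))
```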
[
"Previous work optimizes summarization models by maximizing INLINEFORM0 , the likelihood of the ground-truth labels y = INLINEFORM1 for sentences INLINEFORM2 , given document D and model parameters INLINEFORM3 . This objective can be achieved by minimizing the cross-entropy loss at each decoding step: DISPLAYFORM0 ",
"Cross-entropy training leads to two kinds of discrepancies in the model. The first discrepancy comes from the disconnect between the task definition and the training objective. While MLE in Equation ( EQREF7 ) aims to maximize the likelihood of the ground-truth labels, the model is (a) expected to rank sentences to generate a summary and (b) evaluated using INLINEFORM0 at test time. The second discrepancy comes from the reliance on ground-truth labels. Document collections for training summarization systems do not naturally contain labels indicating which sentences should be extracted. Instead, they are typically accompanied by abstractive summaries from which sentence-level labels are extrapolated. jp-acl16 follow woodsend-acl10 in adopting a rule-based method which assigns labels to each sentence in the document individually based on their semantic correspondence with the gold summary (see the fourth column in Table TABREF6 ). An alternative method BIBREF26 , BIBREF27 , BIBREF10 identifies the set of sentences which collectively gives the highest ROUGE with respect to the gold summary. Sentences in this set are labeled with 1 and 0 otherwise (see the column 5 in Table TABREF6 ).",
"Labeling sentences individually often generates too many positive labels causing the model to overfit the data. For example, the document in Table TABREF6 has 12 positively labeled sentences out of 31 in total (only first 10 are shown). Collective labels present a better alternative since they only pertain to the few sentences deemed most suitable to form the summary. However, a model trained with cross-entropy loss on collective labels will underfit the data as it will only maximize probabilities INLINEFORM0 for sentences in this set (e.g., sentences INLINEFORM1 in Table TABREF6 ) and ignore all other sentences. We found that there are many candidate summaries with high ROUGE scores which could be considered during training.",
"Table TABREF6 (last column) shows candidate summaries ranked according to the mean of ROUGE-1, ROUGE-2, and ROUGE-L F INLINEFORM0 scores. Interestingly, multiple top ranked summaries have reasonably high ROUGE scores. For example, the average ROUGE for the summaries ranked second (0,13), third (11,13), and fourth (0,1,13) is 57.5%, 57.2%, and 57.1%, and all top 16 summaries have ROUGE scores more or equal to 50%. A few sentences are indicative of important content and appear frequently in the summaries: sentence 13 occurs in all summaries except one, while sentence 0 appears in several summaries too. Also note that summaries (11,13) and (1,13) yield better ROUGE scores compared to longer summaries, and may be as informative, yet more concise, alternatives.",
"These discrepancies render the model less efficient at ranking sentences for the summarization task. Instead of maximizing the likelihood of the ground-truth labels, we could train the model to predict the individual ROUGE score for each sentence in the document and then select the top INLINEFORM0 sentences with highest scores. But sentences with individual ROUGE scores do not necessarily lead to a high scoring summary, e.g., they may convey overlapping content and form verbose and redundant summaries. For example, sentence 3, despite having a high individual ROUGE score (35.6%), does not occur in any of the top 5 summaries. We next explain how we address these issues using reinforcement learning."
],
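For reference, the per-sentence cross-entropy objective of Eq. (7) that this section argues against can be written in a few lines of PyTorch. Tensor names are assumptions and this is not the authors' code; it is only a sketch of the loss being criticized.

```python
import torch
import torch.nn.functional as F

def cross_entropy_loss(sentence_scores, gold_labels):
    """
    sentence_scores: tensor of shape (num_sentences,) with predicted p(y_i = 1 | s_i, D).
    gold_labels:     tensor of 0/1 ground-truth extraction labels.
    Returns the negative log-likelihood summed over sentences, as in Eq. (7).
    """
    return F.binary_cross_entropy(sentence_scores, gold_labels.float(), reduction="sum")
```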
[
"Reinforcement learning BIBREF28 has been proposed as a way of training sequence-to-sequence generation models in order to directly optimize the metric used at test time, e.g., BLEU or ROUGE BIBREF29 . We adapt reinforcement learning to our formulation of extractive summarization to rank sentences for summary generation. We propose an objective function that combines the maximum-likelihood cross-entropy loss with rewards from policy gradient reinforcement learning to globally optimize ROUGE. Our training algorithm allows to explore the space of possible summaries, making our model more robust to unseen data. As a result, reinforcement learning helps extractive summarization in two ways: (a) it directly optimizes the evaluation metric instead of maximizing the likelihood of the ground-truth labels and (b) it makes our model better at discriminating among sentences; a sentence is ranked high for selection if it often occurs in high scoring summaries."
],
[
"We cast the neural summarization model introduced in Figure FIGREF2 in the Reinforcement Learning paradigm BIBREF28 . Accordingly, the model can be viewed as an “agent” which interacts with an “environment” consisting of documents. At first, the agent is initialized randomly, it reads document D and predicts a relevance score for each sentence INLINEFORM0 using “policy” INLINEFORM1 , where INLINEFORM2 are model parameters. Once the agent is done reading the document, a summary with labels INLINEFORM3 is sampled out of the ranked sentences. The agent is then given a “reward” INLINEFORM4 commensurate with how well the extract resembles the gold-standard summary. Specifically, as reward function we use mean F INLINEFORM5 of INLINEFORM6 , INLINEFORM7 , and INLINEFORM8 . Unigram and bigram overlap ( INLINEFORM9 and INLINEFORM10 ) are meant to assess informativeness, whereas the longest common subsequence ( INLINEFORM11 ) is meant to assess fluency. We update the agent using the REINFORCE algorithm BIBREF15 which aims to minimize the negative expected reward: DISPLAYFORM0 ",
"where, INLINEFORM0 stands for INLINEFORM1 . REINFORCE is based on the observation that the expected gradient of a non-differentiable reward function (ROUGE, in our case) can be computed as follows: DISPLAYFORM0 ",
"While MLE in Equation ( EQREF7 ) aims to maximize the likelihood of the training data, the objective in Equation ( EQREF9 ) learns to discriminate among sentences with respect to how often they occur in high scoring summaries."
],
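A hedged PyTorch-style sketch of the REINFORCE update in Eqs. (9)-(11): the gradient of the non-differentiable ROUGE reward is approximated by scaling the log-probability of the sampled extract by its reward. Variable names are assumptions, and the reward is assumed to be precomputed as the mean of ROUGE-1/2/L F1 for the sampled extract.

```python
import torch

def reinforce_loss(sentence_scores, sampled_labels, reward):
    """
    sentence_scores: tensor of shape (num_sentences,), probabilities p(y_i = 1 | D).
    sampled_labels:  tensor of 0/1 labels defining the sampled extract y_hat.
    reward:          scalar ROUGE-based reward r(y_hat), precomputed and non-differentiable.
    """
    log_probs = torch.where(sampled_labels == 1,
                            torch.log(sentence_scores + 1e-8),
                            torch.log(1.0 - sentence_scores + 1e-8))
    # Minimizing this term follows the single-sample REINFORCE gradient estimate.
    return -reward * log_probs.sum()
```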
[
"Computing the expectation term in Equation ( EQREF10 ) is prohibitive, since there is a large number of possible extracts. In practice, we approximate the expected gradient using a single sample INLINEFORM0 from INLINEFORM1 for each training example in a batch: DISPLAYFORM0 ",
"Presented in its original form, the REINFORCE algorithm starts learning with a random policy which can make model training challenging for complex tasks like ours where a single document can give rise to a very large number of candidate summaries. We therefore limit the search space of INLINEFORM0 in Equation ( EQREF12 ) to the set of largest probability samples INLINEFORM1 . We approximate INLINEFORM2 by the INLINEFORM3 extracts which receive highest ROUGE scores. More concretely, we assemble candidate summaries efficiently by first selecting INLINEFORM4 sentences from the document which on their own have high ROUGE scores. We then generate all possible combinations of INLINEFORM5 sentences subject to maximum length INLINEFORM6 and evaluate them against the gold summary. Summaries are ranked according to F INLINEFORM7 by taking the mean of INLINEFORM8 , INLINEFORM9 , and INLINEFORM10 . INLINEFORM11 contains these top INLINEFORM12 candidate summaries. During training, we sample INLINEFORM13 from INLINEFORM14 instead of INLINEFORM15 .",
"ranzato-arxiv15-bias proposed an alternative to REINFORCE called MIXER (Mixed Incremental Cross-Entropy Reinforce) which first pretrains the model with the cross-entropy loss using ground truth labels and then follows a curriculum learning strategy BIBREF30 to gradually teach the model to produce stable predictions on its own. In our experiments MIXER performed worse than the model of nallapati17 just trained on collective labels. We conjecture that this is due to the unbounded nature of our ranking problem. Recall that our model assigns relevance scores to sentences rather than words. The space of sentential representations is vast and fairly unconstrained compared to other prediction tasks operating with fixed vocabularies BIBREF31 , BIBREF7 , BIBREF32 . Moreover, our approximation of the gradient allows the model to converge much faster to an optimal policy. Advantageously, we do not require an online reward estimator, we pre-compute INLINEFORM0 , which leads to a significant speedup during training compared to MIXER BIBREF29 and related training schemes BIBREF33 ."
],
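The precomputed set of high-probability samples described above can be sketched as follows: take the p sentences with the best individual ROUGE, enumerate their combinations of up to m sentences, and keep the k combinations whose extracts score highest against the gold summary. The `mean_rouge_f1` scorer is assumed to be supplied externally (e.g., via a ROUGE package), and the summary-length constraint used in the paper is omitted here for brevity.

```python
from itertools import combinations

def candidate_summary_set(sentences, gold_summary, mean_rouge_f1, p=10, m=3, k=5):
    # p sentences with the highest individual ROUGE against the gold summary.
    by_individual = sorted(range(len(sentences)),
                           key=lambda i: mean_rouge_f1(sentences[i], gold_summary),
                           reverse=True)[:p]
    candidates = []
    for size in range(1, m + 1):
        for combo in combinations(sorted(by_individual), size):
            extract = " ".join(sentences[i] for i in combo)
            candidates.append((mean_rouge_f1(extract, gold_summary), combo))
    candidates.sort(reverse=True)
    # Return the k best label sets; training samples extracts from this set instead of p(y | D).
    return [combo for _, combo in candidates[:k]]
```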
[
"In this section we present our experimental setup for assessing the performance of our model which we call Refresh as a shorthand for REinFoRcement Learning-based Extractive Summarization. We describe our datasets, discuss implementation details, our evaluation protocol, and the systems used for comparison."
],
[
"We report results using automatic metrics in Table TABREF20 . The top part of the table compares Refresh against related extractive systems. The bottom part reports the performance of abstractive systems. We present three variants of Lead, one is computed by ourselves and the other two are reported in nallapati17 and see-acl17. Note that they vary slightly due to differences in the preprocessing of the data. We report results on the CNN and DailyMail datasets and their combination (CNN INLINEFORM0 DailyMail)."
],
[
"Traditional summarization methods manually define features to rank sentences for their salience in order to identify the most important sentences in a document or set of documents BIBREF42 , BIBREF43 , BIBREF44 , BIBREF45 , BIBREF46 , BIBREF47 . A vast majority of these methods learn to score each sentence independently BIBREF48 , BIBREF49 , BIBREF50 , BIBREF51 , BIBREF52 , BIBREF53 , BIBREF54 and a summary is generated by selecting top-scored sentences in a way that is not incorporated into the learning process. Summary quality can be improved heuristically, BIBREF55 , via max-margin methods BIBREF56 , BIBREF57 , or integer-linear programming BIBREF58 , BIBREF59 , BIBREF60 , BIBREF61 , BIBREF62 .",
"Recent deep learning methods BIBREF63 , BIBREF64 , BIBREF9 , BIBREF10 learn continuous features without any linguistic preprocessing (e.g., named entities). Like traditional methods, these approaches also suffer from the mismatch between the learning objective and the evaluation criterion (e.g., ROUGE) used at the test time. In comparison, our neural model globally optimizes the ROUGE evaluation metric through a reinforcement learning objective: sentences are highly ranked if they occur in highly scoring summaries.",
"Reinforcement learning has been previously used in the context of traditional multi-document summarization as a means of selecting a sentence or a subset of sentences from a document cluster. Ryang:2012 cast the sentence selection task as a search problem. Their agent observes a state (e.g., a candidate summary), executes an action (a transition operation that produces a new state selecting a not-yet-selected sentence), and then receives a delayed reward based on INLINEFORM0 . Follow-on work BIBREF65 extends this approach by employing ROUGE as part of the reward function, while hens-gscl15 further experiment with INLINEFORM1 -learning. MollaAliod:2017:ALTA2017 has adapt this approach to query-focused summarization. Our model differs from these approaches both in application and formulation. We focus solely on extractive summarization, in our case states are documents (not summaries) and actions are relevance scores which lead to sentence ranking (not sentence-to-sentence transitions). Rather than employing reinforcement learning for sentence selection, our algorithm performs sentence ranking using ROUGE as the reward function.",
"The REINFORCE algorithm BIBREF15 has been shown to improve encoder-decoder text-rewriting systems by allowing to directly optimize a non-differentiable objective BIBREF29 , BIBREF31 , BIBREF7 or to inject task-specific constraints BIBREF32 , BIBREF66 . However, we are not aware of any attempts to use reinforcement learning for training a sentence ranker in the context of extractive summarization."
],
[
"In this work we developed an extractive summarization model which is globally trained by optimizing the ROUGE evaluation metric. Our training algorithm explores the space of candidate summaries while learning to optimize a reward function which is relevant for the task at hand. Experimental results show that reinforcement learning offers a great means to steer our model towards generating informative, fluent, and concise summaries outperforming state-of-the-art extractive and abstractive systems on the CNN and DailyMail datasets. In the future we would like to focus on smaller discourse units BIBREF67 rather than individual sentences, modeling compression and extraction jointly."
]
],
"section_name": [
"Introduction",
"Summarization as Sentence Ranking",
"The Pitfalls of Cross-Entropy Loss",
"Sentence Ranking with Reinforcement Learning",
"Policy Learning",
"Training with High Probability Samples",
"Experimental Setup",
"Results",
"Related Work",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"90cc43d0329755305fa16ce1a0037573a6defee3",
"e6f91b853db0ba3f94e268a13246d23d48ce657c"
],
"answer": [
{
"evidence": [
"We report results on the CNN and DailyMail news highlights datasets BIBREF14 which have been recently used as testbeds for the evaluation of neural summarization systems. Experimental results show that when evaluated automatically (in terms of ROUGE), our model outperforms state-of-the-art extractive and abstractive systems. We also conduct two human evaluations in order to assess (a) which type of summary participants prefer (we compare extractive and abstractive systems) and (b) how much key information from the document is preserved in the summary (we ask participants to answer questions pertaining to the content in the document by reading system summaries). Both evaluations overwhelmingly show that human subjects find our summaries more informative and complete."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We also conduct two human evaluations in order to assess (a) which type of summary participants prefer (we compare extractive and abstractive systems) and (b) how much key information from the document is preserved in the summary (we ask participants to answer questions pertaining to the content in the document by reading system summaries)."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"d9a07962d94f4a9878ba7343a6f3eb9c23983c51"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"a8c41b1757485d7e2ad6878e0d4f5a7a301aac54",
"e29474824036f7ff279948e6e2d1f204c633eba5"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Experimental Setup",
"In this section we present our experimental setup for assessing the performance of our model which we call Refresh as a shorthand for REinFoRcement Learning-based Extractive Summarization. We describe our datasets, discuss implementation details, our evaluation protocol, and the systems used for comparison."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Experimental Setup missing subsections)\nTo be selected: We compared REFRESH against a baseline which simply selects the first m leading sentences from each document (LEAD) and two neural models similar to ours (see left block in Figure 1), both trained with cross-entropy loss.\nAnswer: LEAD",
"highlighted_evidence": [
"Experimental Setup\nIn this section we present our experimental setup for assessing the performance of our model which we call Refresh as a shorthand for REinFoRcement Learning-based Extractive Summarization. We describe our datasets, discuss implementation details, our evaluation protocol, and the systems used for comparison."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"question": [
"Do they use other evaluation metrics besides ROUGE?",
"What is their ROUGE score?",
"What are the baselines?"
],
"question_id": [
"f03112b868b658c954db62fc64430bebbaa7d9e0",
"5152b78f5dfee26f1b13f221c1405ffa9b9ba3a4",
"a6d3e57de796172c236e33a6ceb4cca793dc2315"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Figure 1: Extractive summarization model with reinforcement learning: a hierarchical encoder-decoder model ranks sentences for their extract-worthiness and a candidate summary is assembled from the top ranked sentences; the REWARD generator compares the candidate against the gold summary to give a reward which is used in the REINFORCE algorithm (Williams, 1992) to update the model.",
"Table 1: An abridged CNN article (only first 15 out of 31 sentences are shown) and its “story highlights”. The latter are typically written by journalists to allow readers to quickly gather information on stories. Highlights are often used as gold standard abstractive summaries in the summarization literature.",
"Figure 2: Summaries produced by the LEAD baseline, the abstractive system of See et al. (2017) and REFRESH for a CNN (test) article. GOLD presents the human-authored summary; the bottom block shows manually written questions using the gold summary and their answers in parentheses.",
"Table 2: Results on the CNN and DailyMail test sets. We report ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) F1 scores. Extractive systems are in the first block and abstractive systems in the second. Table cells are filled with — whenever results are not available. Models marked with ∗ are not directly comparable to ours as they are based on an anonymized version of the dataset.",
"Table 3: System ranking and QA-based evaluations. Rankings (1st, 2nd, 3rd and 4th) are shown as proportions. Rank 1 is the best and Rank 4, the worst. The column QA shows the percentage of questions that participants answered correctly by reading system summaries."
],
"file": [
"4-Figure1-1.png",
"5-Table1-1.png",
"7-Figure2-1.png",
"9-Table2-1.png",
"9-Table3-1.png"
]
} | [
"What are the baselines?"
] | [
[
"1802.08636-Experimental Setup-0"
]
] | [
"Answer with content missing: (Experimental Setup missing subsections)\nTo be selected: We compared REFRESH against a baseline which simply selects the first m leading sentences from each document (LEAD) and two neural models similar to ours (see left block in Figure 1), both trained with cross-entropy loss.\nAnswer: LEAD"
] | 205 |
2001.07820 | Elephant in the Room: An Evaluation Framework for Assessing Adversarial Examples in NLP | An adversarial example is an input transformed by small perturbations that machine learning models consistently misclassify. While there are a number of methods proposed to generate adversarial examples for text data, it is not trivial to assess the quality of these adversarial examples, as minor perturbations (such as changing a word in a sentence) can lead to a significant shift in their meaning, readability and classification label. In this paper, we propose an evaluation framework to assess the quality of adversarial examples based on the aforementioned properties. We experiment with five benchmark attacking methods and an alternative approach based on an auto-encoder, and found that these methods generate adversarial examples with poor readability and content preservation. We also learned that there are multiple factors that can influence the attacking performance, such as the the length of text examples and the input domain. | {
"paragraphs": [
[
"Adversarial examples, a term introduced in BIBREF0, are inputs transformed by small perturbations that machine learning models consistently misclassify. The experiments are conducted in the context of computer vision (CV), and the core idea is encapsulated by an illustrative example: after imperceptible noises are added to a panda image, an image classifier predicts, with high confidence, that it is a gibbon. Interestingly, these adversarial examples can also be used to improve the classifier — either as additional training data BIBREF0 or as a regularisation objective BIBREF1 — thus providing motivation for generating effective adversarial examples.",
"The germ of this paper comes from our investigation of adversarial attack methods for natural language processing (NLP) tasks, e.g. sentiment classification, which drives us to quantify what is an “effective” or “good” adversarial example. In the context of images, a good adversarial example is typically defined according two criteria:",
"it has successfully fooled the classifier;",
"it is visually similar to the original example.",
"In NLP, defining a good adversarial example is a little more involving, because while criterion (b) can be measured with a comparable text similarity metric (e.g. BLEU or edit distance), an adversarial example should also:",
"be fluent or natural;",
"preserve its original label.",
"These two additional criteria are generally irrelevant for images, as adding minor perturbations to an image is unlikely to: (1) create an uninterpretable image (where else changing one word in a sentence can render a sentence incoherent), or (2) change how we perceive the image, say from seeing a panda to a gibbon (but a sentence's sentiment can be reversed by simply adding a negative adverb such as not). Without considering criterion (d), generating adversarial examples in NLP would be trivial, as the model can learn to simply replace a positive adjective (amazing) with a negative one (awful) to attack a sentiment classifier.",
"To the best of our knowledge, most studies on adversarial example generation in NLP have largely ignored these additional criteria BIBREF2, BIBREF3, BIBREF4, BIBREF5. We believe the lack of a rigorous evaluation framework partially explains why adversarial training for NLP models has not seen the same extent of improvement compared to CV models. As our experiments reveal, examples generated from most attacking methods are successful in fooling the classifier, but their language is often unnatural and the original label is not properly preserved.",
"The core contribution of our paper is to introduce a systematic, rigorous evaluation framework to assess the quality of adversarial examples for NLP. We focus on sentiment classification as the target task, as it is a popular application that highlights the importance of criteria discussed above. We test a number of attacking methods and also propose an alternative approach (based on an auto-encoder) for generating adversarial examples. We learn that a number of factors can influence the performance of adversarial attacks, including architecture of the classifier, sentence length and input domain."
],
[
"Most existing adversarial attack methods for text inputs are derived from those for image inputs. These methods can be categorised into three types including gradient-based attacks, optimisation-based attacks and model-based attacks.",
"Gradient-based attacks are mainly white-box attacks that rely on calculating the gradients of the target classifier with respect to the input representation. This class of attacking methods BIBREF6, BIBREF7, BIBREF6 are mainly derived from the fast gradient sign method (FGSM) BIBREF1, and it has been shown to be effective in attacking CV classifiers. However, these gradient-based methods could not be applied to text directly because perturbed word embeddings do not necessarily map to valid words. Other methods such as DeepFool BIBREF8 that rely on perturbing the word embedding space face similar roadblocks. BIBREF5 propose to use nearest neighbour search to find the closest word to the perturbed embedding.",
"Both optimisation-based and model-based attacks treat adversarial attack as an optimisation problem where the constraints are to maximise the loss of target classifiers and to minimise the difference between original and adversarial examples. Between these two, the former uses optimisation algorithms directly; while the latter trains a seperate model to generate adversarial examples and therefore involves a training process. Some of the most effective attacks for images are achieved by optimisation-based methods, such as the L-BFGS attack BIBREF1 and the C&W attack BIBREF9 in white-box attacks and the ZOO method BIBREF10 in black-box attacks. For texts, the white-box attack HotFlip BIBREF3 and black-box attack DeepWordBug BIBREF11 and TextBugger BIBREF12 are proposed in this category.",
"In a similar vein, a few model-based attacks have been proposed for images, e.g. BIBREF13 design a generative adversarial network (GAN) to generate the image perturbation from a noise map. The attacking method and target classifier typically form a single large network and the attacking method is trained using the loss from the target classifier. For this reason, it is not very straightforward to use these model-based techniques for text because there is a discontinuity in the network (since words in the adversarial examples are discrete) and so it is not fully differentiable."
],
[
"There are a number of off-the-shelf neural models for sentiment classification BIBREF14, BIBREF15, most of which are based on long-short term memory networks (LSTM) BIBREF16 or convolutional neural networks (CNN) BIBREF14. In this paper, we pre-train three sentiment classifiers: BiLSTM, BiLSTM$+$A, and CNN. These classifiers are targeted by white-box attacking methods to generate adversarial examples (detailed in Section SECREF9). BiLSTM is composed of an embedding layer that maps individual words to pre-trained word embeddings; a number of bi-directional LSTMs that capture sequential contexts; and an output layer that maps the averaged LSTM hidden states to a binary output. BiLSTM$+$A is similar to BiLSTM except it has an extra self-attention layer which learns to attend to salient words for sentiment classification, and we compute a weighted mean of the LSTM hidden states prior to the output layer. Manual inspection of the attention weights show that polarity words such as awesome and disappointed are assigned with higher weights. Finally, CNN has a number of convolutional filters of varying sizes, and their outputs are concatenated, pooled and fed to a fully-connected layer followed by a binary output layer.",
"Recent development in transformer-based pre-trained models have produced state-of-the-art performance on a range of NLP tasks BIBREF17, BIBREF18. To validate the transferability of the attacking methods, we also fine-tune a BERT classifier for black-box tests. That is, we use the adversarial examples generated for attacking the three previous classifiers (BiLSTM, BiLSTM$+$A and CNN) as test data for BERT to measure its classification performance to understand whether these adversarial examples can fool BERT."
],
[
"We experiment with five benchmark attacking methods for texts: FGM, FGVM, DeepFool BIBREF5, HotFlip BIBREF3) and TYC BIBREF4.",
"To perturb the discrete inputs, both FGM and FGVM introduce noises in the word embedding space via the fast gradient method BIBREF1 and reconstruct the input by mapping perturbed word embeddings to valid words via nearest neighbour search. Between FGM and FGVM, the former introduce noises that is proportional to the sign of the gradients while the latter introduce perturbations proportional to the gradients directly. The proportion is known as the overshoot value and denoted by $\\epsilon $. DeepFool uses the same trick to deal with discrete inputs except that, instead of using the fast gradient method, it uses the DeepFool method introduced in BIBREF8 for image to search for an optimal direction to perturb the word embeddings.",
"Unlike the previous methods, HotFlip and TYC rely on performing one or more atomic flip operations to replace words while monitoring the label change given by the target classifier. In HotFlip, the directional derivatives w.r.t. flip operations are calculated and the flip operation that results in the largest increase in loss is selected. TYC is similar to FGM, FGVM and DeepFool in that it also uses nearest neighbour search to map the perturbed embeddings to valid words, but instead of using the perturbed tokens directly, it uses greedy search or beam search to flip original tokens to perturbed ones one at a time in order of their vulnerability."
],
[
"The benchmark methods we test (Section SECREF9) are gradient-based and optimisation-based attacks (Section SECREF2). We propose an alternative model-based method to train a seperate generative model to generate text adversarial examples. Denoting the generative model as $\\mathcal {G}$, we train it to generate an adversarial example $X^{\\prime }$ given an input example $X$ such that $X^{\\prime }$ is similar to $X$ but that it changes the prediction of the target classifier $\\mathcal {D}$, i.e. $X^{\\prime } \\sim X$ and $\\mathcal {D}(X^{\\prime }) \\ne \\mathcal {D}(X)$.",
"For images, it is straightforward to combine $\\mathcal {G}$ and $\\mathcal {D}$ in the same network and use the loss of $\\mathcal {D}$ to update $\\mathcal {G}$ because $X^{\\prime }$ is continuous and can be fed to $\\mathcal {D}$ while keeping the whole network differentiable. For texts, $X^{\\prime }$ are discrete symbols (typically decoded with argmax or beam-search) and so the network is not fully differentiable. To create a fully differentiable network with $\\mathcal {G}+\\mathcal {D}$, we propose to use Gumbel-Softmax to imitate the categorical distribution BIBREF19. As the temperature ($\\tau $) of Gumbel-Softmax sampling approaches 0, the distribution of the sampled output $X^*$ is identical to $X^{\\prime }$. We illustrate this in Figure FIGREF12, where $X^*$ is fed to $\\mathcal {D}$ during training while $X^{\\prime }$ is generated as the output (i.e. the adversarial example) at test time.",
"$\\mathcal {G}$ is designed as an auto-encoder to reconstruct the input (denoted as AutoEncoder), with the following objectives to capture the evaluation criteria described in Section SECREF1:",
"$L_{adv}$, the adversarial loss that maximises the cross-entropy of $\\mathcal {D}$;",
"$L_{seq}$, the auto-encoder loss to reconstruct $X^{\\prime }$ from $X^{\\prime }$;",
"$L_{sem}$, the cosine distance between the mean embeddings of $X$ and $X^*$.",
"$L_{adv}$ ensures that the adversarial examples are fooling the classifier (criterion (a)); $L_{seq}$ regulates the adversarial example so that it isn't too different from the original input and has a reasonable language (criteria (b) and (c)); and $L_{sem}$ constrains the underlying semantic content of the original and adversarial sentences to be similar (criterion (b)); so reduces the likelihood of a sentiment flip (indirectly for criteria (d)). Note that criteria (d) is the perhaps the most difficult aspect to handle, as its interpretation is ultimately a human judgement.",
"We use two scaling hyper-parameters $\\lambda _{ae}$ and $\\lambda _{seq}$ to weigh the three objectives: $\\lambda _{ae}(\\lambda _{seq} * L_{seq} + (1-\\lambda _{seq})*L_{sem}) + (1-\\lambda _{ae})*L_{adv}$. We introduce attention layers to AutoEncoder and test both greedy and beam search decoding to improve the quality of generated adversarial examples. During training we alternate between positive-to-negative and negative-to-positive attack for different mini-batches."
],
[
"We construct three datasets based on IMDB reviews and Yelp reviews. The IMDB dataset is binarised and split into a training and test set, each with 25K reviews (2K reviews from the training set are reserved for development). We filter out any review that has more than 400 tokens, producing the final dataset (imdb400). For Yelp, we binarise the ratings, and create 2 datasets, where we keep only reviews with $\\le $ 50 tokens (yelp50) and $\\le $200 tokens (yelp200). We randomly partition both datasets into train/dev/test sets (90/5/5 for yelp50; 99/0.5/0.5 for yelp200). For all datasets, we use spaCy for tokenisation. We train and tune target classifiers (see Section SECREF8) using the training and development sets; and evaluate their performance on the original examples in the test sets as well as the adversarial examples generated by attacking methods for the test sets. Note that AutoEncoder also involves a training process, for which we train and tune AutoEncoder using the training and development sets in yelp50, yelp200 and imdb400. Statistics of the three datasets are presented in Table TABREF22. These datasets present a variation in the text lengths (e.g. the average number of words for yelp50, yelp200 and imdb400 is 34, 82 and 195 words respectively), training data size (e.g. the number of training examples for target classifiers for imdb400, yelp50 and yelp200 are 18K, 407K and 2M, respectively) and input domain (e.g. restaurant vs. movie reviews)."
],
[
"We use the pre-trained glove.840B.300d embeddings BIBREF20 for all 6 attacking methods. For FGM, FGVM and DeepFool, we tune $\\epsilon $, the overshoot hyper-parameter (Section SECREF9) and keep the iterative step $n$ static (5). For TYC, besides $\\epsilon $ we also tune the upper limit of flipped words, ranging from 10%–100% of the maximum length. For HotFlip, we tune only the upper limit of flipped words, in the range of $[1, 7]$.",
"We pre-train AutoEncoder to reconstruct sentences in different datasets as we found that this improves the quality of the generated adversarial examples. During pre-training, we tune batch size, number of layers and number of units, and stop the training after the performance on the development sets stops improving for 20K steps. The model is then initialised with the pre-trained weights and trained based on objectives defined in Section SECREF11. During the training process we tune $\\lambda _{ae}$ and $\\lambda _{seq}$ while keeping the batch size (32) and learning rate ($1e^{-4}$) fixed. As part of our preliminary study, we also tested different values for the Gumbel-softmax temperature $\\tau $ and find that $\\tau =0.1$ performs the best. Embeddings are fixed throughout all training processes.",
"For target classifiers, we tune batch size, learning rate, number of layers, number of units, attention size (BiLSTM$+$A), filter sizes and dropout probability (CNN). For BERT, we use the default fine-tuning hyper-parameter values except for batch size, where we adjust based on memory consumption. Note that after the target classifiers are trained their weights are not updated when training or testing the attacking methods."
],
[
"We propose both automatic metrics and manual evaluation strategies to assess the quality of adversarial examples, based on four criteria defined in Section SECREF1: (a) attacking performance (i.e. how well they fool the classifier); (b) textual similarity between the original input and the adversarial input; (c) fluency of the adversarial example; and (d) label preservation. Note that the automatic metrics only address the first 3 criteria (a, b and c); we contend that criterion (d) requires manual evaluation, as the judgement of whether the original label is preserved is inherently a human decision."
],
[
"As sentiment classification is our target task, we use the standard classification accuracy (ACC) to evaluate the attacking performance of adversarial examples (criterion (a)).",
"To assess the similarity between the original and (transformed) adversarial examples (criteria (b)), we compute BLEU scores BIBREF21.",
"To measure fluency, we first explore a supervised BERT model fine-tuned to predict linguistic acceptability BIBREF17. However, in preliminary experiments we found that BERT performs very poorly at predicting the acceptability of adversarial examples (e.g. it predicts word-salad-like sentences generated by FGVM as very acceptable), revealing the brittleness of these supervised models. We next explore an unsupervised approach BIBREF22, BIBREF23, using normalised sentence probabilities estimated by pre-trained language models for measuring acceptability. In the original papers, the authors tested simple recurrent language models; here we use modern pre-trained language models such as GPT-2 BIBREF24 and XLNet BIBREF18. Our final acceptability metric (ACPT) is based on normalised XLNet sentence probabilities: ${\\log P(s)} / ({((5+|s|)/(5+1))^\\alpha })$, where $s$ is the sentence, and $\\alpha $ is a hyper-parameter (set to 0.8) to dampen the impact of large values BIBREF25.",
"We only computed BLEU and ACPT scores for adversarial examples that have successfully fooled the classifier. Our rationale is that unsuccessful examples can artificially boost these scores by not making any modifications, and so the better approach is to only consider successful examples."
],
[
"We present the performance of the attacking methods against 3 target classifiers (Table TABREF23A; top) and on 3 datasets (Table TABREF23B; bottom). We choose 3 ACC thresholds for the attacking performance: T0, T1 and T2, which correspond approximately to accuracy scores of 90%, 80% and 70% for the Yelp datasets (yelp50, yelp200); and 80%, 70% and 50% for the IMDB datasets (imdb400). Each method is tuned accordingly to achieve a particular accuracy. Missing numbers (dashed lines) indicate the method is unable to produce the desired accuracy, e.g. HotFlip with only 1 word flip produces 81.5% accuracy (T1) when attacking CNN on yelp50, and so T0 accuracy is unachievable.",
"Looking at BLEU and ACPT, HotFlip is the most consistent method over multiple datasets and classifiers. AutoEncoder is also fairly competitive, producing largely comparable performance (except for the yelp200 dataset). Gradient-based methods FGM and FGVM perform very poorly. In general, they tend to produce word salad adversarial examples, as indicated by their poor BLEU scores. DeepFool similarly generates incoherent sentences with low BLEU scores, but occasionally produces good ACPT (BiLSTM at T1 and T2), suggesting potential brittleness of the unsupervised approach for evaluating acceptability.",
"Comparing the performance across different ACC thresholds, we observe a consistent pattern of decreasing performance over all metrics as the attacking performance increases from T0 to T2. These observations suggest that all methods are trading off fluency and content preservation as they attempt to generate stronger adversarial examples.",
"We now focus on Table TABREF23A, to understand the impact of model architecture for the target classifier. With 1 word flip as the upper limit for HotFlip, the accuracy of BiLSTM$+$A and BiLSTM drops to T0 (approximately 4% accuracy decrease) while the accuracy of CNN drops to T1 (approximately 13% accuracy decrease), suggesting that convolutional networks are more vulnerable to attacks (noting that it is the predominant architecture for CV). Looking at Table TABREF23B, we also find that the attacking performance is influenced by the input text length and the number of training examples for target classifiers. For HotFlip, we see improvements over BLEU and ACPT as text length increases from yelp50 to yelp200 to imdb400, indicating the performance of HotFlip is more affected by input lengths. We think this is because with more words it is more likely for HotFlip to find a vulnerable spot to target. While for TYC and AutoEncoder, we see improvements over BLEU and ACPT as the number of training examples for target classifier decreases from yelp200 (2M) to yelp50 (407K) to imdb400 (22K), indicating these two methods are less effective for attacking target classifiers that are trained with more data. Therefore, increasing the training data for target classifier may improve their robustness against adversarial examples that are generated by certain methods.",
"As a black-box test to check how well these adversarial examples generalise to fooling other classifiers, also known as transferability, we feed the adversarial examples from the 3 best methods, i.e. TYC, HotFlip and AutoEncoder, to a pre-trained BERT trained for sentiment classification (Figure FIGREF31) and measure its accuracy. Unsurprisingly, we observe that the attacking performance (i.e. the drop in ACC) is not as good as those reported for the white-box tests. Interestingly, we find that HotFlip, the best method, produces the least effective adversarial examples for BERT. Both TYC and AutoEncoder perform better here, as their generated adversarial examples do well in fooling BERT.",
"To summarise, our results demonstrate that the best white-box methods (e.g. HotFlip) may not produce adversarial examples that generalise to fooling other classifiers. We also saw that convolutional networks are more vulnerable than recurrent networks, and that dataset features such as text lengths and training data size (for target classifiers) can influence how difficult it is to perform adversarial attack."
],
[
"Automatic metrics provide a proxy to quantify the quality of the adversarial examples. To validate that these metrics work, we conduct a crowdsourcing experiment on Figure-Eight. Recall that the automatic metrics do not assess sentiment preservation (criterion (d)); we evaluate that aspect here.",
"We experiment with the 3 best methods (TYC, AutoEncoder and HotFlip) on 2 accuracy thresholds (T0 and T2), using BiLSTM$+$A as the classifier. For each method and threshold, we randomly sample 25 positive-to-negative and 25 negative-to-positive examples. To control for quality, we reserve and annotate 10% of the samples ourselves as control questions. Workers are first presented with 10 control questions as a quiz, and only those who pass the quiz with at least 80% accuracy can continue to work on the task. We display 10 questions per page, where one control question is embedded to continue monitor worker performance. The task is designed such that each control question can only be seen once per worker. We restrict our jobs to workers in United States, United Kingdoms, Australia, and Canada.",
"To evaluate the criteria discussed in Section SECREF1: (b) textual similarity, (c) fluency, and (d) sentiment preservation, we ask the annotators three questions:",
"Is snippet B a good paraphrase of snippet A?",
"$\\circledcirc $ Yes $\\circledcirc $ Somewhat yes $\\circledcirc $ No",
"How natural does the text read?",
"$\\circledcirc $ Very unnatural $\\circledcirc $ Somewhat natural $\\circledcirc $ Natural",
"What is the sentiment of the text?",
"$\\circledcirc $ Positive $\\circledcirc $ Negative $\\circledcirc $ Cannot tell",
"For question 1, we display both the adversarial input and the original input, while for question 2 and 3 we present only the adversarial example. As an upper-bound, we also run a survey on question 2 and 3 for 50 random original samples."
],
[
"We present the percentage of answers to each question in Figure FIGREF38. The green bars illustrate how well the adversarial examples paraphrase the original ones; blue how natural the adversarial examples read; and red whether the sentiment of the adversarial examples is consistent compared to the original.",
"Looking at the performance of the original sentences (“(a) Original samples”), we see that their language is largely fluent and their sentiment is generally consistent to the original examples', although it's worth noting that the review sentiment of imdb400 can be somewhat ambiguous (63% agreement). We think this is due to movie reviews being more descriptive and therefore creating potential ambiguity in their expression of sentiment.",
"On content preservation (criterion (b); green bars), all methods produce poor paraphrases on yelp50. For imdb400, however, the results are more promising; adversarial examples generated by HotFlip, in particular, are good, even at T2.",
"Next we look at fluency (criterion (c); blue bars). We see a similar trend: performance in imdb400 is substantially better than yelp50. In fact we see almost no decrease in fluency in the adversarial examples compared to the original in imdb400. In yelp50, HotFlip and AutoEncoder are fairly competitive, producing adversarial examples that are only marginally less fluent compared to the original at T0. At T2, however, these methods begin to trade off fluency. All in all, the paraphrasability and fluency surveys suggest that imdb400 is an easier dataset for adversarial experiments, and it is the predominant dataset used by most studies.",
"Lastly, we consider sentiment preservation (criterion (d); red bars). All methods perform poorly at preserving the original sentiment on both yelp50 and imdb400 datasets. The artifact is arguably more profound on shorter inputs, as the original examples in imdb400 have lower agreement in the first place (yelp50 vs. imdb400: 86% vs. 63%). Again both HotFlip and AutoEncoder are the better methods here (interestingly, we observe an increase in agreement as their attacking performance increases from T0 and T2).",
"Summarising our findings, HotFlip is generally the best method across all criteria, noting that its adversarial examples, however, have poor transferability. TYC generates good black-box adversarial examples but do not do well in terms of content preservation and fluency. AutoEncoder produces comparable results with HotFlip for meeting the four criteria and generates examples that generalise reasonably, but it is very sensitive to the increase of training examples for the target classifier. The ACPT metric appears to be effective in evaluating fluency, as we see good agreement with human evaluation. All said, we found that all methods tend to produce adversarial examples that do not preserve their original sentiments, revealing that these methods in a way “cheat” by simply flipping the sentiments of the original sentences to fool the classifier, and therefore the adversarial examples might be ineffective for adversarial training, as they are not examples that reveal potential vulnerabilities in the classifier."
],
[
"We propose an evaluation framework for assessing the quality of adversarial examples in NLP, based on four criteria: (a) attacking performance, (b) textual similarity; (c) fluency; (d) label preservation. Our framework involves both automatic and human evaluation, and we test 5 benchmark methods and a novel auto-encoder approach. We found that the architecture of the target classifier is an important factor when it comes to attacking performance, e.g. CNNs are more vulnerable than LSTMs; dataset features such as length of text, training data size (for target classifiers) and input domains are also influencing factors that affect how difficulty it is to perform adversarial attack; and the predominant dataset (IMDB) used by most studies is comparatively easy for adversarial attack. Lastly, we also observe in our human evaluation that on shorter texts (Yelp) these methods produce adversarial examples that tend not to preserve their semantic content and have low readability. More importantly, these methods also “cheat” by simply flipping the sentiment in the adversarial examples, and this behaviour is evident on both datasets, suggesting they could be ineffective for adversarial training."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology ::: Sentiment Classifiers",
"Methodology ::: Benchmark Attacking Methods",
"Methodology ::: Model-based Attacking Method",
"Experiments ::: Datasets",
"Experiments ::: Implementation Details",
"Evaluation",
"Evaluation ::: Automatic Evaluation: Metrics",
"Evaluation ::: Automatic Evaluation: Results",
"Evaluation ::: Human Evaluation: Design",
"Evaluation ::: Human Evaluation: Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"e874c63467cb220f3682b632b846b6f1b1fa71d7",
"f09482daaead27516f1efb69be40fd085dce592c"
],
"answer": [
{
"evidence": [
"We construct three datasets based on IMDB reviews and Yelp reviews. The IMDB dataset is binarised and split into a training and test set, each with 25K reviews (2K reviews from the training set are reserved for development). We filter out any review that has more than 400 tokens, producing the final dataset (imdb400). For Yelp, we binarise the ratings, and create 2 datasets, where we keep only reviews with $\\le $ 50 tokens (yelp50) and $\\le $200 tokens (yelp200). We randomly partition both datasets into train/dev/test sets (90/5/5 for yelp50; 99/0.5/0.5 for yelp200). For all datasets, we use spaCy for tokenisation. We train and tune target classifiers (see Section SECREF8) using the training and development sets; and evaluate their performance on the original examples in the test sets as well as the adversarial examples generated by attacking methods for the test sets. Note that AutoEncoder also involves a training process, for which we train and tune AutoEncoder using the training and development sets in yelp50, yelp200 and imdb400. Statistics of the three datasets are presented in Table TABREF22. These datasets present a variation in the text lengths (e.g. the average number of words for yelp50, yelp200 and imdb400 is 34, 82 and 195 words respectively), training data size (e.g. the number of training examples for target classifiers for imdb400, yelp50 and yelp200 are 18K, 407K and 2M, respectively) and input domain (e.g. restaurant vs. movie reviews)."
],
"extractive_spans": [
"three datasets based on IMDB reviews and Yelp reviews"
],
"free_form_answer": "",
"highlighted_evidence": [
"We construct three datasets based on IMDB reviews and Yelp reviews. The IMDB dataset is binarised and split into a training and test set, each with 25K reviews (2K reviews from the training set are reserved for development).",
"For Yelp, we binarise the ratings, and create 2 datasets, where we keep only reviews with $\\le $ 50 tokens (yelp50) and $\\le $200 tokens (yelp200)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We construct three datasets based on IMDB reviews and Yelp reviews. The IMDB dataset is binarised and split into a training and test set, each with 25K reviews (2K reviews from the training set are reserved for development). We filter out any review that has more than 400 tokens, producing the final dataset (imdb400). For Yelp, we binarise the ratings, and create 2 datasets, where we keep only reviews with $\\le $ 50 tokens (yelp50) and $\\le $200 tokens (yelp200). We randomly partition both datasets into train/dev/test sets (90/5/5 for yelp50; 99/0.5/0.5 for yelp200). For all datasets, we use spaCy for tokenisation. We train and tune target classifiers (see Section SECREF8) using the training and development sets; and evaluate their performance on the original examples in the test sets as well as the adversarial examples generated by attacking methods for the test sets. Note that AutoEncoder also involves a training process, for which we train and tune AutoEncoder using the training and development sets in yelp50, yelp200 and imdb400. Statistics of the three datasets are presented in Table TABREF22. These datasets present a variation in the text lengths (e.g. the average number of words for yelp50, yelp200 and imdb400 is 34, 82 and 195 words respectively), training data size (e.g. the number of training examples for target classifiers for imdb400, yelp50 and yelp200 are 18K, 407K and 2M, respectively) and input domain (e.g. restaurant vs. movie reviews)."
],
"extractive_spans": [],
"free_form_answer": "1 IMDB dataset and 2 Yelp datasets",
"highlighted_evidence": [
"We construct three datasets based on IMDB reviews and Yelp reviews. The IMDB dataset is binarised and split into a training and test set, each with 25K reviews (2K reviews from the training set are reserved for development). We filter out any review that has more than 400 tokens, producing the final dataset (imdb400). For Yelp, we binarise the ratings, and create 2 datasets, where we keep only reviews with $\\le $ 50 tokens (yelp50) and $\\le $200 tokens (yelp200)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"bd24646263a10c8bbf01180fabf3bf6cdaa28b05"
],
"answer": [
{
"evidence": [
"The core contribution of our paper is to introduce a systematic, rigorous evaluation framework to assess the quality of adversarial examples for NLP. We focus on sentiment classification as the target task, as it is a popular application that highlights the importance of criteria discussed above. We test a number of attacking methods and also propose an alternative approach (based on an auto-encoder) for generating adversarial examples. We learn that a number of factors can influence the performance of adversarial attacks, including architecture of the classifier, sentence length and input domain."
],
"extractive_spans": [
"architecture of the classifier",
"sentence length",
" input domain"
],
"free_form_answer": "",
"highlighted_evidence": [
"We learn that a number of factors can influence the performance of adversarial attacks, including architecture of the classifier, sentence length and input domain."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"bf9bc0b08739ca2d27b27cfa485994534e4d6be3",
"e1edfef0d21cac3b9d6d455c34f89341492e7c78"
],
"answer": [
{
"evidence": [
"We experiment with five benchmark attacking methods for texts: FGM, FGVM, DeepFool BIBREF5, HotFlip BIBREF3) and TYC BIBREF4."
],
"extractive_spans": [
"FGM, FGVM, DeepFool BIBREF5, HotFlip BIBREF3) and TYC BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"We experiment with five benchmark attacking methods for texts: FGM, FGVM, DeepFool BIBREF5, HotFlip BIBREF3) and TYC BIBREF4."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We experiment with five benchmark attacking methods for texts: FGM, FGVM, DeepFool BIBREF5, HotFlip BIBREF3) and TYC BIBREF4."
],
"extractive_spans": [
"FGM",
"FGVM",
"DeepFool",
"HotFlip",
"TYC"
],
"free_form_answer": "",
"highlighted_evidence": [
"We experiment with five benchmark attacking methods for texts: FGM, FGVM, DeepFool BIBREF5, HotFlip BIBREF3) and TYC BIBREF4."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What datasets do they use?",
"What other factors affect the performance?",
"What are the benchmark attacking methods?"
],
"question_id": [
"395b61d368e8766014aa960fde0192e4196bcb85",
"92bb41cf7bd1f7886784796a8220ed5aa07bc49b",
"4ef11518b40cc55d86c485f14e24732123b0d907"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Framework to generate adversarial examples.",
"Table 1: Dataset statistics.",
"Table 2: Results based on automatic metrics. Top half (A) presents 3 different target classifiers evaluated on one dataset (yelp50); bottom half (B) tests 3 datasets using one classifier (BiLSTM+A). For ACPT, less negative values are better. Boldface indicates optimal performance in each column.",
"Figure 2: Accuracy of BERT on adversarial examples generated from TYC, HOTFLIP and AUTOENCODER for different target classifiers (top row) and for different datasets (bottom row). BERT’s accuracy on original examples are denoted as red line in each plot.",
"Figure 3: Human evaluation results."
],
"file": [
"3-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png"
]
} | [
"What datasets do they use?"
] | [
[
"2001.07820-Experiments ::: Datasets-0"
]
] | [
"1 IMDB dataset and 2 Yelp datasets"
] | 206 |
2002.01320 | CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus | Spoken language translation has recently witnessed a resurgence in popularity, thanks to the development of end-to-end models and the creation of new corpora, such as Augmented LibriSpeech and MuST-C. Existing datasets involve language pairs with English as a source language, involve very specific domains or are low resource. We introduce CoVoST, a multilingual speech-to-text translation corpus from 11 languages into English, diversified with over 11,000 speakers and over 60 accents. We describe the dataset creation methodology and provide empirical evidence of the quality of the data. We also provide initial benchmarks, including, to our knowledge, the first end-to-end many-to-one multilingual models for spoken language translation. CoVoST is released under CC0 license and free to use. We also provide additional evaluation data derived from Tatoeba under CC licenses. | {
"paragraphs": [
[
"End-to-end speech-to-text translation (ST) has attracted much attention recently BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 given its simplicity against cascading automatic speech recognition (ASR) and machine translation (MT) systems. The lack of labeled data, however, has become a major blocker for bridging the performance gaps between end-to-end models and cascading systems. Several corpora have been developed in recent years. post2013improved introduced a 38-hour Spanish-English ST corpus by augmenting the transcripts of the Fisher and Callhome corpora with English translations. di-gangi-etal-2019-must created the largest ST corpus to date from TED talks but the language pairs involved are out of English only. beilharz2019librivoxdeen created a 110-hour German-English ST corpus from LibriVox audiobooks. godard-etal-2018-low created a Moboshi-French ST corpus as part of a rare language documentation effort. woldeyohannis provided an Amharic-English ST corpus in the tourism domain. boito2019mass created a multilingual ST corpus involving 8 languages from a multilingual speech corpus based on Bible readings BIBREF7. Previous work either involves language pairs out of English, very specific domains, very low resource languages or a limited set of language pairs. This limits the scope of study, including the latest explorations on end-to-end multilingual ST BIBREF8, BIBREF9. Our work is mostly similar and concurrent to iranzosnchez2019europarlst who created a multilingual ST corpus from the European Parliament proceedings. The corpus we introduce has larger speech durations and more translation tokens. It is diversified with multiple speakers per transcript/translation. Finally, we provide additional out-of-domain test sets.",
"In this paper, we introduce CoVoST, a multilingual ST corpus based on Common Voice BIBREF10 for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. It includes a total 708 hours of French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh) speeches, with French and German ones having the largest durations among existing public corpora. We also collect an additional evaluation corpus from Tatoeba for French, German, Dutch, Russian and Spanish, resulting in a total of 9.3 hours of speech. Both corpora are created at the sentence level and do not require additional alignments or segmentation. Using the official Common Voice train-development-test split, we also provide baseline models, including, to our knowledge, the first end-to-end many-to-one multilingual ST models. CoVoST is released under CC0 license and free to use. The Tatoeba evaluation samples are also available under friendly CC licenses. All the data can be acquired at https://github.com/facebookresearch/covost."
],
[
"Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences. Each voice clip was validated by at least two other users. Most of the sentences are covered by multiple speakers, with potentially different genders, age groups or accents.",
"Raw CoVo data contains samples that passed validation as well as those that did not. To build CoVoST, we only use the former one and reuse the official train-development-test partition of the validated data. As of January 2020, the latest CoVo 2019-06-12 release includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese.",
"Validated transcripts were sent to professional translators. Note that the translators had access to the transcripts but not the corresponding voice clips since clips would not carry additional information. Since transcripts were duplicated due to multiple speakers, we deduplicated the transcripts before sending them to translators. As a result, different voice clips of the same content (transcript) will have identical translations in CoVoST for train, development and test splits.",
"In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11. 1) For German-English, French-English and Russian-English translations, we computed sentence-level BLEU BIBREF12 with the NLTK BIBREF13 implementation between the human translations and the automatic translations produced by a state-of-the-art system BIBREF14 (the French-English system was a Transformer big BIBREF15 separately trained on WMT14). We applied this method to these three language pairs only as we are confident about the quality of the corresponding systems. Translations with a score that was too low were manually inspected and sent back to the translators when needed. 2) We manually inspected examples where the source transcript was identical to the translation. 3) We measured the perplexity of the translations using a language model trained on a large amount of clean monolingual data BIBREF14. We manually inspected examples where the translation had a high perplexity and sent them back to translators accordingly. 4) We computed the ratio of English characters in the translations. We manually inspected examples with a low ratio and sent them back to translators accordingly. 5) Finally, we used VizSeq BIBREF16 to calculate similarity scores between transcripts and translations based on LASER cross-lingual sentence embeddings BIBREF17. Samples with low scores were manually inspected and sent back for translation when needed.",
"We also sanity check the overlaps of train, development and test sets in terms of transcripts and voice clips (via MD5 file hashing), and confirm they are totally disjoint."
],
[
"Tatoeba (TT) is a community built language learning corpus having sentences aligned across multiple languages with the corresponding speech partially available. Its sentences are on average shorter than those in CoVoST (see also Table TABREF2) given the original purpose of language learning. Sentences in TT are licensed under CC BY 2.0 FR and part of the speeches are available under various CC licenses.",
"We construct an evaluation set from TT (for French, German, Dutch, Russian and Spanish) as a complement to CoVoST development and test sets. We collect (speech, transcript, English translation) triplets for the 5 languages and do not include those whose speech has a broken URL or is not CC licensed. We further filter these samples by sentence lengths (minimum 4 words including punctuations) to reduce the portion of short sentences. This makes the resulting evaluation set closer to real-world scenarios and more challenging.",
"We run the same quality checks for TT as for CoVoST but we do not find poor quality translations according to our criteria. Finally, we report the overlap between CoVo transcripts and TT sentences in Table TABREF5. We found a minimal overlap, which makes the TT evaluation set a suitable additional test set when training on CoVoST."
],
[
"Basic statistics for CoVoST and TT are listed in Table TABREF2 including (unique) sentence counts, speech durations, speaker demographics (partially available) as well as vocabulary and token statistics (based on Moses-tokenized sentences by sacreMoses) on both transcripts and translations. We see that CoVoST has over 327 hours of German speeches and over 171 hours of French speeches, which, to our knowledge, corresponds to the largest corpus among existing public ST corpora (the second largest is 110 hours BIBREF18 for German and 38 hours BIBREF19 for French). Moreover, CoVoST has a total of 18 hours of Dutch speeches, to our knowledge, contributing the first public Dutch ST resource. CoVoST also has around 27-hour Russian speeches, 37-hour Italian speeches and 67-hour Persian speeches, which is 1.8 times, 2.5 times and 13.3 times of the previous largest public one BIBREF7. Most of the sentences (transcripts) in CoVoST are covered by multiple speakers with potentially different accents, resulting in a rich diversity in the speeches. For example, there are over 1,000 speakers and over 10 accents in the French and German development / test sets. This enables good coverage of speech variations in both model training and evaluation."
],
[
"As we can see from Table TABREF2, CoVoST is diversified with a rich set of speakers and accents. We further inspect the speaker demographics in terms of sample distributions with respect to speaker counts, accent counts and age groups, which is shown in Figure FIGREF6, FIGREF7 and FIGREF8. We observe that for 8 of the 11 languages, at least 60% of the sentences (transcripts) are covered by multiple speakers. Over 80% of the French sentences have at least 3 speakers. And for German sentences, even over 90% of them have at least 5 speakers. Similarly, we see that a large portion of sentences are spoken in multiple accents for French, German, Dutch and Spanish. Speakers of each language also spread widely across different age groups (below 20, 20s, 30s, 40s, 50s, 60s and 70s)."
],
[
"We provide baselines using the official train-development-test split on the following tasks: automatic speech recognition (ASR), machine translation (MT) and speech translation (ST)."
],
[
"We convert raw MP3 audio files from CoVo and TT into mono-channel waveforms, and downsample them to 16,000 Hz. For transcripts and translations, we normalize the punctuation, we tokenize the text with sacreMoses and lowercase it. For transcripts, we further remove all punctuation markers except for apostrophes. We use character vocabularies on all the tasks, with 100% coverage of all the characters. Preliminary experimentation showed that character vocabularies provided more stable training than BPE. For MT, the vocabulary is created jointly on both transcripts and translations. We extract 80-channel log-mel filterbank features, computed with a 25ms window size and 10ms window shift using torchaudio. The features are normalized to 0 mean and 1.0 standard deviation. We remove samples having more than 3,000 frames or more than 256 characters for GPU memory efficiency (less than 25 samples are removed for all languages)."
],
[
"Our ASR and ST models follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing. For MT, we use a Transformer base architecture BIBREF15, but with 3 encoder layers, 3 decoder layers and 0.3 dropout. We use a batch size of 10,000 frames for ASR and ST, and a batch size of 4,000 tokens for MT. We train all models using Fairseq BIBREF20 for up to 200,000 updates. We use SpecAugment BIBREF21 for ASR and ST to alleviate overfitting."
],
[
"We use a beam size of 5 for all models. We use the best checkpoint by validation loss for MT, and average the last 5 checkpoints for ASR and ST. For MT and ST, we report case-insensitive tokenized BLEU BIBREF22 using sacreBLEU BIBREF23. For ASR, we report word error rate (WER) and character error rate (CER) using VizSeq."
],
[
"For simplicity, we use the same model architecture for ASR and ST, although we do not leverage ASR models to pretrain ST model encoders later. Table TABREF18 shows the word error rate (WER) and character error rate (CER) for ASR models. We see that French and German perform the best given they are the two highest resource languages in CoVoST. The other languages are relatively low resource (especially Turkish and Swedish) and the ASR models are having difficulties to learn from this data."
],
[
"MT models take transcripts (without punctuation) as inputs and outputs translations (with punctuation). For simplicity, we do not change the text preprocessing methods for MT to correct this mismatch. Moreover, this mismatch also exists in cascading ST systems, where MT model inputs are the outputs of an ASR model. Table TABREF20 shows the BLEU scores of MT models. We notice that the results are consistent with what we see from ASR models. For example thanks to abundant training data, French has a decent BLEU score of 29.8/25.4. German doesn't perform well, because of less richness of content (transcripts). The other languages are low resource in CoVoST and it is difficult to train decent models without additional data or pre-training techniques."
],
[
"CoVoST is a many-to-one multilingual ST corpus. While end-to-end one-to-many and many-to-many multilingual ST models have been explored very recently BIBREF8, BIBREF9, many-to-one multilingual models, to our knowledge, have not. We hence use CoVoST to examine this setting. Table TABREF22 and TABREF23 show the BLEU scores for both bilingual and multilingual end-to-end ST models trained on CoVoST. We observe that combining speeches from multiple languages is consistently bringing gains to low-resource languages (all besides French and German). This includes combinations of distant languages, such as Ru+Fr, Tr+Fr and Zh+Fr. Moreover, some combinations do bring gains to high-resource language (French) as well: Es+Fr, Tr+Fr and Mn+Fr. We simply provide the most basic many-to-one multilingual baselines here, and leave the full exploration of the best configurations to future work. Finally, we note that for some language pairs, absolute BLEU numbers are relatively low as we restrict model training to the supervised data. We encourage the community to improve upon those baselines, for example by leveraging semi-supervised training."
],
[
"In CoVoST, large portion of transcripts are covered by multiple speakers with different genders, accents and age groups. Besides the standard corpus-level BLEU scores, we also want to evaluate model output variance on the same content (transcript) but different speakers. We hence propose to group samples (and their sentence BLEU scores) by transcript, and then calculate average per-group mean and average coefficient of variation defined as follows:",
"and",
"where $G$ is the set of sentence BLEU scores grouped by transcript and $G^{\\prime } = \\lbrace g | g\\in G, |g|>1, \\textrm {Mean}(g) > 0 \\rbrace $.",
"$\\textrm {BLEU}_{MS}$ provides a normalized quality score as oppose to corpus-level BLEU or unnormalized average of sentence BLEU. And $\\textrm {CoefVar}_{MS}$ is a standardized measure of model stability against different speakers (the lower the better). Table TABREF24 shows the $\\textrm {BLEU}_{MS}$ and $\\textrm {CoefVar}_{MS}$ of our ST models on CoVoST test set. We see that German and Persian have the worst $\\textrm {CoefVar}_{MS}$ (least stable) given their rich speaker diversity in the test set and relatively small train set (see also Figure FIGREF6 and Table TABREF2). Dutch also has poor $\\textrm {CoefVar}_{MS}$ because of the lack of training data. Multilingual models are consistantly more stable on low-resource languages. Ru+Fr, Tr+Fr, Fa+Fr and Zh+Fr even have better $\\textrm {CoefVar}_{MS}$ than all individual languages."
],
[
"We introduce a multilingual speech-to-text translation corpus, CoVoST, for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. We also provide baseline results, including, to our knowledge, the first end-to-end many-to-one multilingual model for spoken language translation. CoVoST is free to use with a CC0 license, and the additional Tatoeba evaluation samples are also CC-licensed."
]
],
"section_name": [
"Introduction",
"Data Collection and Processing ::: Common Voice (CoVo)",
"Data Collection and Processing ::: Tatoeba (TT)",
"Data Analysis ::: Basic Statistics",
"Data Analysis ::: Speaker Diversity",
"Baseline Results",
"Baseline Results ::: Experimental Settings ::: Data Preprocessing",
"Baseline Results ::: Experimental Settings ::: Model Training",
"Baseline Results ::: Experimental Settings ::: Inference and Evaluation",
"Baseline Results ::: Automatic Speech Recognition (ASR)",
"Baseline Results ::: Machine Translation (MT)",
"Baseline Results ::: Speech Translation (ST)",
"Baseline Results ::: Multi-Speaker Evaluation",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"df078844d25f479b8ead4942ecad620daa926444"
],
"answer": [
{
"evidence": [
"Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences. Each voice clip was validated by at least two other users. Most of the sentences are covered by multiple speakers, with potentially different genders, age groups or accents."
],
"extractive_spans": [],
"free_form_answer": "No specific domain is covered in the corpus.",
"highlighted_evidence": [
"Contributors record voice clips by reading from a bank of donated sentences."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"80f3528a0832c825e313ff87e2788af9eec99c01"
],
"answer": [
{
"evidence": [
"Our ASR and ST models follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing. For MT, we use a Transformer base architecture BIBREF15, but with 3 encoder layers, 3 decoder layers and 0.3 dropout. We use a batch size of 10,000 frames for ASR and ST, and a batch size of 4,000 tokens for MT. We train all models using Fairseq BIBREF20 for up to 200,000 updates. We use SpecAugment BIBREF21 for ASR and ST to alleviate overfitting."
],
"extractive_spans": [
"follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our ASR and ST models follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"907e5c11cdbff5f3425a6fa48ddc39b908bb1467",
"bf76fb9818222fc4d3113c42ccd1845b986e0c75"
],
"answer": [
{
"evidence": [
"Raw CoVo data contains samples that passed validation as well as those that did not. To build CoVoST, we only use the former one and reuse the official train-development-test partition of the validated data. As of January 2020, the latest CoVo 2019-06-12 release includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese.",
"Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences. Each voice clip was validated by at least two other users. Most of the sentences are covered by multiple speakers, with potentially different genders, age groups or accents."
],
"extractive_spans": [
"Contributors record voice clips by reading from a bank of donated sentences."
],
"free_form_answer": "",
"highlighted_evidence": [
"As of January 2020, the latest CoVo 2019-06-12 release includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese.",
"Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences. Each voice clip was validated by at least two other users. Most of the sentences are covered by multiple speakers, with potentially different genders, age groups or accents.",
"Tatoeba (TT) is a community built language learning corpus having sentences aligned across multiple languages with the corresponding speech partially available. Its sentences are on average shorter than those in CoVoST (see also Table TABREF2) given the original purpose of language learning. Sentences in TT are licensed under CC BY 2.0 FR and part of the speeches are available under various CC licenses."
],
"extractive_spans": [
"crowdsourcing"
],
"free_form_answer": "",
"highlighted_evidence": [
"Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. ",
"Tatoeba (TT) is a community built language learning corpus having sentences aligned across multiple languages with the corresponding speech partially available."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"9537360eb33d0b6773851807777b5a14a9d0b5db",
"eef21f40b263a41797aaed8abd3f30a79189f9d4"
],
"answer": [
{
"evidence": [
"In this paper, we introduce CoVoST, a multilingual ST corpus based on Common Voice BIBREF10 for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. It includes a total 708 hours of French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh) speeches, with French and German ones having the largest durations among existing public corpora. We also collect an additional evaluation corpus from Tatoeba for French, German, Dutch, Russian and Spanish, resulting in a total of 9.3 hours of speech. Both corpora are created at the sentence level and do not require additional alignments or segmentation. Using the official Common Voice train-development-test split, we also provide baseline models, including, to our knowledge, the first end-to-end many-to-one multilingual ST models. CoVoST is released under CC0 license and free to use. The Tatoeba evaluation samples are also available under friendly CC licenses. All the data can be acquired at https://github.com/facebookresearch/covost."
],
"extractive_spans": [
"French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh)"
],
"free_form_answer": "",
"highlighted_evidence": [
"It includes a total 708 hours of French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh) speeches, with French and German ones having the largest durations among existing public corpora."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Raw CoVo data contains samples that passed validation as well as those that did not. To build CoVoST, we only use the former one and reuse the official train-development-test partition of the validated data. As of January 2020, the latest CoVo 2019-06-12 release includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese.",
"We construct an evaluation set from TT (for French, German, Dutch, Russian and Spanish) as a complement to CoVoST development and test sets. We collect (speech, transcript, English translation) triplets for the 5 languages and do not include those whose speech has a broken URL or is not CC licensed. We further filter these samples by sentence lengths (minimum 4 words including punctuations) to reduce the portion of short sentences. This makes the resulting evaluation set closer to real-world scenarios and more challenging."
],
"extractive_spans": [
"French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese"
],
"free_form_answer": "",
"highlighted_evidence": [
"CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese.",
"We construct an evaluation set from TT (for French, German, Dutch, Russian and Spanish) as a complement to CoVoST development and test sets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"645889b8157862eec0c1488ba0473a121add1eec",
"cb64593de11fe1cfcc9e999aafec21c0e3d7e70f"
],
"answer": [
{
"evidence": [
"Validated transcripts were sent to professional translators. Note that the translators had access to the transcripts but not the corresponding voice clips since clips would not carry additional information. Since transcripts were duplicated due to multiple speakers, we deduplicated the transcripts before sending them to translators. As a result, different voice clips of the same content (transcript) will have identical translations in CoVoST for train, development and test splits.",
"In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11. 1) For German-English, French-English and Russian-English translations, we computed sentence-level BLEU BIBREF12 with the NLTK BIBREF13 implementation between the human translations and the automatic translations produced by a state-of-the-art system BIBREF14 (the French-English system was a Transformer big BIBREF15 separately trained on WMT14). We applied this method to these three language pairs only as we are confident about the quality of the corresponding systems. Translations with a score that was too low were manually inspected and sent back to the translators when needed. 2) We manually inspected examples where the source transcript was identical to the translation. 3) We measured the perplexity of the translations using a language model trained on a large amount of clean monolingual data BIBREF14. We manually inspected examples where the translation had a high perplexity and sent them back to translators accordingly. 4) We computed the ratio of English characters in the translations. We manually inspected examples with a low ratio and sent them back to translators accordingly. 5) Finally, we used VizSeq BIBREF16 to calculate similarity scores between transcripts and translations based on LASER cross-lingual sentence embeddings BIBREF17. Samples with low scores were manually inspected and sent back for translation when needed.",
"We also sanity check the overlaps of train, development and test sets in terms of transcripts and voice clips (via MD5 file hashing), and confirm they are totally disjoint."
],
"extractive_spans": [
"Validated transcripts were sent to professional translators.",
"various sanity checks to the translations",
" sanity check the overlaps of train, development and test sets"
],
"free_form_answer": "",
"highlighted_evidence": [
"Validated transcripts were sent to professional translators.",
"In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11.",
"We also sanity check the overlaps of train, development and test sets in terms of transcripts and voice clips (via MD5 file hashing), and confirm they are totally disjoint."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11. 1) For German-English, French-English and Russian-English translations, we computed sentence-level BLEU BIBREF12 with the NLTK BIBREF13 implementation between the human translations and the automatic translations produced by a state-of-the-art system BIBREF14 (the French-English system was a Transformer big BIBREF15 separately trained on WMT14). We applied this method to these three language pairs only as we are confident about the quality of the corresponding systems. Translations with a score that was too low were manually inspected and sent back to the translators when needed. 2) We manually inspected examples where the source transcript was identical to the translation. 3) We measured the perplexity of the translations using a language model trained on a large amount of clean monolingual data BIBREF14. We manually inspected examples where the translation had a high perplexity and sent them back to translators accordingly. 4) We computed the ratio of English characters in the translations. We manually inspected examples with a low ratio and sent them back to translators accordingly. 5) Finally, we used VizSeq BIBREF16 to calculate similarity scores between transcripts and translations based on LASER cross-lingual sentence embeddings BIBREF17. Samples with low scores were manually inspected and sent back for translation when needed."
],
"extractive_spans": [
"computed sentence-level BLEU",
"We manually inspected examples where the source transcript was identical to the translation",
"measured the perplexity of the translations",
"computed the ratio of English characters in the translations",
"calculate similarity scores between transcripts and translations"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11. 1) For German-English, French-English and Russian-English translations, we computed sentence-level BLEU BIBREF1",
"In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11. 1) For German-English, French-English and Russian-English translations, we computed sentence-level BLEU BIBREF12 with the NLTK BIBREF13 implementation between the human translations and the automatic translations produced by a state-of-the-art system BIBREF14 (the French-English system was a Transformer big BIBREF15 separately trained on WMT14).",
"2) We manually inspected examples where the source transcript was identical to the translation",
"3) We measured the perplexity of the translations using a language model trained on a large amount of clean monolingual data BIBREF14.",
"4) We computed the ratio of English characters in the translations.",
"5) Finally, we used VizSeq BIBREF16 to calculate similarity scores between transcripts and translations based on LASER cross-lingual sentence embeddings BIBREF17."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"1a8a860767e7516ddfa42c262aaf0f8b57282e00"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"60b8c3954a0e32ac92d9cac8d9a898f5cbc9d5b6",
"f4d999ab855b6a79863a929685e622eb083ef46e"
],
"answer": [
{
"evidence": [
"Raw CoVo data contains samples that passed validation as well as those that did not. To build CoVoST, we only use the former one and reuse the official train-development-test partition of the validated data. As of January 2020, the latest CoVo 2019-06-12 release includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"Raw CoVo data contains samples that passed validation as well as those that did not. To build CoVoST, we only use the former one and reuse the official train-development-test partition of the validated data. As of January 2020, the latest CoVo 2019-06-12 release includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0792ac6dc4d73f7154bf6ab7ac606695da31789c",
"26ab084c437b086d3d733820be922c6e35e23f3b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What domains are covered in the corpus?",
"What is the architecture of their model?",
"How was the dataset collected?",
"Which languages are part of the corpus?",
"How is the quality of the data empirically evaluated? ",
"Is the data in CoVoST annotated for dialect?",
"Is Arabic one of the 11 languages in CoVost?",
"How big is Augmented LibriSpeech dataset?"
],
"question_id": [
"6a219d7c58451842aa5d6819a7cdf51c55e9fc0f",
"cee8cfaf26e49d98e7d34fa1b414f8f31d6502ad",
"f8f4e4a50d2b3fbd193327e79ea32d8d057e1414",
"bc84c5a58c57038910f7720d7a784560054d3e1a",
"29923a824c98b3ba85ced964a0e6a2af35758abe",
"559c68802ee2bb8b11e2188127418ca3a6155ba7",
"8dc707a0daf7bff61a97d9d854283e65c0c85064",
"ffde866b1203a01580eb33237a0bb9da71c75ecf"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Basic statistics of CoVoST and TT evaluation set. Token statistics are based on Moses-tokenized sentences. Speaker demographics is partially available.",
"Figure 1: CoVoST transcript distribution by number of speakers.",
"Figure 2: CoVoST transcript distribution by number of speaker accents.",
"Table 2: TT-CoVo transcript overlapping rate.",
"Figure 3: CoVoST transcript distribution by speaker age groups.",
"Table 3: WER and CER scores for ASR models.",
"Table 6: BLEU scores for end-to-end ST models (continuation of Table 5).",
"Table 4: BLEU scores for MT models.",
"Table 7: Average per-group mean and average coefficient of variation for ST sentence BLEU scores on CoVoST test set (groups correspond to one transcript and multiple speakers). The latter is unavailable for Swedish, Mongolian and Chinese because models are unable to acheive non-zero scores on multi-speaker samples."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"3-Figure2-1.png",
"3-Table2-1.png",
"4-Figure3-1.png",
"4-Table3-1.png",
"5-Table6-1.png",
"5-Table4-1.png",
"5-Table7-1.png"
]
} | [
"What domains are covered in the corpus?"
] | [
[
"2002.01320-Data Collection and Processing ::: Common Voice (CoVo)-0"
]
] | [
"No specific domain is covered in the corpus."
] | 207 |
1605.07333 | Combining Recurrent and Convolutional Neural Networks for Relation Classification | This paper investigates two different neural architectures for the task of relation classification: convolutional neural networks and recurrent neural networks. For both models, we demonstrate the effect of different architectural choices. We present a new context representation for convolutional neural networks for relation classification (extended middle context). Furthermore, we propose connectionist bi-directional recurrent neural networks and introduce ranking loss for their optimization. Finally, we show that combining convolutional and recurrent neural networks using a simple voting scheme is accurate enough to improve results. Our neural models achieve state-of-the-art results on the SemEval 2010 relation classification task. | {
"paragraphs": [
[
"Relation classification is the task of assigning sentences with two marked entities to a predefined set of relations. The sentence “We poured the <e1>milk</e1> into the <e2>pumpkin mixture</e2>.”, for example, expresses the relation Entity-Destination(e1,e2). While early research mostly focused on support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 , recent research showed performance improvements by applying neural networks (NNs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 on the benchmark data from SemEval 2010 shared task 8 BIBREF8 .",
"This study investigates two different types of NNs: recurrent neural networks (RNNs) and convolutional neural networks (CNNs) as well as their combination. We make the following contributions:",
"(1) We propose extended middle context, a new context representation for CNNs for relation classification. The extended middle context uses all parts of the sentence (the relation arguments, left of the relation arguments, between the arguments, right of the arguments) and pays special attention to the middle part.",
"(2) We present connectionist bi-directional RNN models which are especially suited for sentence classification tasks since they combine all intermediate hidden layers for their final decision. Furthermore, the ranking loss function is introduced for the RNN model optimization which has not been investigated in the literature for relation classification before.",
"(3) Finally, we combine CNNs and RNNs using a simple voting scheme and achieve new state-of-the-art results on the SemEval 2010 benchmark dataset."
],
[
"In 2010, manually annotated data for relation classification was released in the context of a SemEval shared task BIBREF8 . Shared task participants used, i.a., support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 . Recently, their results on this data set were outperformed by applying NNs BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 .",
"zeng2014 built a CNN based only on the context between the relation arguments and extended it with several lexical features. kim2014 and others used convolutional filters of different sizes for CNNs. nguyen applied this to relation classification and obtained improvements over single filter sizes. deSantos2015 replaced the softmax layer of the CNN with a ranking layer. They showed improvements and published the best result so far on the SemEval dataset, to our knowledge.",
"socher used another NN architecture for relation classification: recursive neural networks that built recursive sentence representations based on syntactic parsing. In contrast, zhang investigated a temporal structured RNN with only words as input. They used a bi-directional model with a pooling layer on top."
],
[
"CNNs perform a discrete convolution on an input matrix with a set of different filters. For NLP tasks, the input matrix represents a sentence: Each column of the matrix stores the word embedding of the corresponding word. By applying a filter with a width of, e.g., three columns, three neighboring words (trigram) are convolved. Afterwards, the results of the convolution are pooled. Following collobertWeston, we perform max-pooling which extracts the maximum value for each filter and, thus, the most informative n-gram for the following steps. Finally, the resulting values are concatenated and used for classifying the relation expressed in the sentence."
],
[
"One of our contributions is a new input representation especially designed for relation classification. The contexts are split into three disjoint regions based on the two relation arguments: the left context, the middle context and the right context. Since in most cases the middle context contains the most relevant information for the relation, we want to focus on it but not ignore the other regions completely. Hence, we propose to use two contexts: (1) a combination of the left context, the left entity and the middle context; and (2) a combination of the middle context, the right entity and the right context. Due to the repetition of the middle context, we force the network to pay special attention to it. The two contexts are processed by two independent convolutional and max-pooling layers. After pooling, the results are concatenated to form the sentence representation. Figure FIGREF3 depicts this procedure. It shows an examplary sentence: “He had chest pain and <e1>headaches</e1> from <e2>mold</e2> in the bedroom.” If we only considered the middle context “from”, the network might be tempted to predict a relation like Entity-Origin(e1,e2). However, by also taking the left and right context into account, the model can detect the relation Cause-Effect(e2,e1). While this could also be achieved by integrating the whole context into the model, using the whole context can have disadvantages for longer sentences: The max pooling step can easily choose a value from a part of the sentence which is far away from the mention of the relation. With splitting the context into two parts, we reduce this danger. Repeating the middle context increases the chance for the max pooling step to pick a value from the middle context."
],
[
"Following previous work (e.g., BIBREF5 , BIBREF6 ), we use 2D filters spanning all embedding dimensions. After convolution, a max pooling operation is applied that stores only the highest activation of each filter. We apply filters with different window sizes 2-5 (multi-windows) as in BIBREF5 , i.e. spanning a different number of input words."
],
[
"Traditional RNNs consist of an input vector, a history vector and an output vector. Based on the representation of the current input word and the previous history vector, a new history is computed. Then, an output is predicted (e.g., using a softmax layer). In contrast to most traditional RNN architectures, we use the RNN for sentence modeling, i.e., we predict an output vector only after processing the whole sentence and not after each word. Training is performed using backpropagation through time BIBREF9 which unfolds the recurrent computations of the history vector for a certain number of time steps. To avoid exploding gradients, we use gradient clipping with a threshold of 10 BIBREF10 ."
],
[
"Initial experiments showed that using trigrams as input instead of single words led to superior results. Hence, at timestep INLINEFORM0 we do not only give word INLINEFORM1 to the model but the trigram INLINEFORM2 by concatenating the corresponding word embeddings."
],
[
"Especially for relation classification, the processing of the relation arguments might be easier with knowledge of the succeeding words. Therefore in bi-directional RNNs, not only a history vector of word INLINEFORM0 is regarded but also a future vector. This leads to the following conditioned probability for the history INLINEFORM1 at time step INLINEFORM2 : DISPLAYFORM0 ",
"Thus, the network can be split into three parts: a forward pass which processes the original sentence word by word (Equation EQREF6 ); a backward pass which processes the reversed sentence word by word (Equation ); and a combination of both (Equation ). All three parts are trained jointly. This is also depicted in Figure FIGREF7 .",
"Combining forward and backward pass by adding their hidden layer is similar to BIBREF7 . We, however, also add a connection to the previous combined hidden layer with weight INLINEFORM0 to be able to include all intermediate hidden layers into the final decision of the network (see Equation ). We call this “connectionist bi-directional RNN”.",
"In our experiments, we compare this RNN with uni-directional RNNs and bi-directional RNNs without additional hidden layer connections."
],
[
"Words are represented by concatenated vectors: a word embedding and a position feature vector.",
"Pretrained word embeddings. In this study, we used the word2vec toolkit BIBREF11 to train embeddings on an English Wikipedia from May 2014. We only considered words appearing more than 100 times and added a special PADDING token for convolution. This results in an embedding training text of about 485,000 terms and INLINEFORM0 tokens. During model training, the embeddings are updated.",
"Position features. We incorporate randomly initialized position embeddings similar to zeng2014, nguyen and deSantos2015. In our RNN experiments, we investigate different possibilities of integrating position information: position embeddings, position embeddings with entity presence flags (flags indicating whether the current word is one of the relation arguments), and position indicators BIBREF7 ."
],
[
"Ranking. We applied the ranking loss function proposed in deSantos2015 to train our models. It maximizes the distance between the true label INLINEFORM0 and the best competitive label INLINEFORM1 given a data point INLINEFORM2 . The objective function is DISPLAYFORM0 ",
"with INLINEFORM0 and INLINEFORM1 being the scores for the classes INLINEFORM2 and INLINEFORM3 respectively. The parameter INLINEFORM4 controls the penalization of the prediction errors and INLINEFORM5 and INLINEFORM6 are margins for the correct and incorrect classes. Following deSantos2015, we set INLINEFORM7 . We do not learn a pattern for the class Other but increase its difference to the best competitive label by using only the second summand in Equation EQREF10 during training."
],
[
"We used the relation classification dataset of the SemEval 2010 task 8 BIBREF8 . It consists of sentences which have been manually labeled with 19 relations (9 directed relations and one artificial class Other). 8,000 sentences have been distributed as training set and 2,717 sentences served as test set. For evaluation, we applied the official scoring script and report the macro F1 score which also served as the official result of the shared task.",
"RNN and CNN models were implemented with theano BIBREF12 , BIBREF13 . For all our models, we use L2 regularization with a weight of 0.0001. For CNN training, we use mini batches of 25 training examples while we perform stochastic gradient descent for the RNN. The initial learning rates are 0.2 for the CNN and 0.01 for the RNN. We train the models for 10 (CNN) and 50 (RNN) epochs without early stopping. As activation function, we apply tanh for the CNN and capped ReLU for the RNN. For tuning the hyperparameters, we split the training data into two parts: 6.5k (training) and 1.5k (development) sentences. We also tuned the learning rate schedule on dev.",
"Beside of training single models, we also report ensemble results for which we combined the presented single models with a voting process."
],
[
"As a baseline system, we implemented a CNN similar to the one described by zeng2014. It consists of a standard convolutional layer with filters with only one window size, followed by a softmax layer. As input it uses the middle context. In contrast to zeng2014, our CNN does not have an additional fully connected hidden layer. Therefore, we increased the number of convolutional filters to 1200 to keep the number of parameters comparable. With this, we obtain a baseline result of 73.0. After including 5 dimensional position features, the performance was improved to 78.6 (comparable to 78.9 as reported by zeng2014 without linguistic features).",
"In the next step, we investigate how this result changes if we successively add further features to our CNN: multi-windows for convolution (window sizes: 2,3,4,5 and 300 feature maps each), ranking layer instead of softmax and our proposed extended middle context. Table TABREF12 shows the results. Note that all numbers are produced by CNNs with a comparable number of parameters. We also report F1 for increasing the word embedding dimensionality from 50 to 400. The position embedding dimensionality is 5 in combination with 50 dimensional word embeddings and 35 with 400 dimensional word embeddings. Our results show that especially the ranking layer and the embedding size have an important impact on the performance."
],
[
"As a baseline for the RNN models, we apply a uni-directional RNN which predicts the relation after processing the whole sentence. With this model, we achieve an F1 score of 61.2 on the SemEval test set.",
"Afterwards, we investigate the impact of different position features on the performance of uni-directional RNNs (position embeddings, position embeddings concatenated with a flag indicating whether the current word is an entity or not, and position indicators BIBREF7 ). The results indicate that position indicators (i.e. artificial words that indicate the entity presence) perform the best on the SemEval data. We achieve an F1 score of 73.4 with them. However, the difference to using position embeddings with entity flags is not statistically significant.",
"Similar to our CNN experiments, we successively vary the RNN models by using bi-directionality, by adding connections between the hidden layers (“connectionist”), by applying ranking instead of softmax to predict the relation and by increasing the word embedding dimension to 400.",
"The results in Table TABREF14 show that all of these variations lead to statistically significant improvements. Especially the additional hidden layer connections and the integration of the ranking layer have a large impact on the performance."
],
[
"Finally, we combine our CNN and RNN models using a voting process. For each sentence in the test set, we apply several CNN and RNN models presented in Tables TABREF12 and TABREF14 and predict the class with the most votes. In case of a tie, we pick one of the most frequent classes randomly. The combination achieves an F1 score of 84.9 which is better than the performance of the two NN types alone. It, thus, confirms our assumption that the networks provide complementary information: while the RNN computes a weighted combination of all words in the sentence, the CNN extracts the most informative n-grams for the relation and only considers their resulting activations."
],
[
"Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features."
],
[
"In this paper, we investigated different features and architectural choices for convolutional and recurrent neural networks for relation classification without using any linguistic features. For convolutional neural networks, we presented a new context representation for relation classification. Furthermore, we introduced connectionist recurrent neural networks for sentence classification tasks and performed the first experiments with ranking recurrent neural networks. Finally, we showed that even a simple combination of convolutional and recurrent neural networks improved results. With our neural models, we achieved new state-of-the-art results on the SemEval 2010 task 8 benchmark data."
],
[
"Heike Adel is a recipient of the Google European Doctoral Fellowship in Natural Language Processing and this research is supported by this fellowship.",
"This research was also supported by Deutsche Forschungsgemeinschaft: grant SCHU 2246/4-2."
]
],
"section_name": [
"Introduction",
"Related Work",
"Convolutional Neural Networks (CNN)",
"Input: Extended Middle Context",
"Convolutional Layer",
"Recurrent Neural Networks (RNN)",
"Input of the RNNs",
"Connectionist Bi-directional RNNs",
"Word Representations",
"Objective Function: Ranking Loss",
"Experiments and Results",
"Performance of CNNs",
"Performance of RNNs",
"Combination of CNNs and RNNs",
"Comparison with State of the Art",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"0602f1527dfd6d42f276fcc3ceafa5ba3e5d9752",
"0c1b4c11400ed5398a9fb65980772a957937696d"
],
"answer": [
{
"evidence": [
"Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features.",
"FLOAT SELECTED: Table 3: State-of-the-art results for relation classification"
],
"extractive_spans": [],
"free_form_answer": "0.8% F1 better than the best state-of-the-art",
"highlighted_evidence": [
"Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models.",
"FLOAT SELECTED: Table 3: State-of-the-art results for relation classification"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features.",
"FLOAT SELECTED: Table 3: State-of-the-art results for relation classification"
],
"extractive_spans": [],
"free_form_answer": "Best proposed model achieves F1 score of 84.9 compared to best previous result of 84.1.",
"highlighted_evidence": [
"Table TABREF16 shows the results of our models ER-CNN (extended ranking CNN) and R-RNN (ranking RNN) in the context of other state-of-the-art models. Our proposed models obtain state-of-the-art results on the SemEval 2010 task 8 data set without making use of any linguistic features.",
"FLOAT SELECTED: Table 3: State-of-the-art results for relation classification"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7ef12d9ce739d01a4fd9e4507ab0860b9a5f6da9",
"aab28a7eb6042b34fd2c0d6c6c51a386d06d930f"
],
"answer": [
{
"evidence": [
"We used the relation classification dataset of the SemEval 2010 task 8 BIBREF8 . It consists of sentences which have been manually labeled with 19 relations (9 directed relations and one artificial class Other). 8,000 sentences have been distributed as training set and 2,717 sentences served as test set. For evaluation, we applied the official scoring script and report the macro F1 score which also served as the official result of the shared task."
],
"extractive_spans": [
"relation classification dataset of the SemEval 2010 task 8"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used the relation classification dataset of the SemEval 2010 task 8 BIBREF8 . It consists of sentences which have been manually labeled with 19 relations (9 directed relations and one artificial class Other). 8,000 sentences have been distributed as training set and 2,717 sentences served as test set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We used the relation classification dataset of the SemEval 2010 task 8 BIBREF8 . It consists of sentences which have been manually labeled with 19 relations (9 directed relations and one artificial class Other). 8,000 sentences have been distributed as training set and 2,717 sentences served as test set. For evaluation, we applied the official scoring script and report the macro F1 score which also served as the official result of the shared task."
],
"extractive_spans": [
"SemEval 2010 task 8 BIBREF8"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used the relation classification dataset of the SemEval 2010 task 8 BIBREF8 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4ef4720eebb093453ee142f01667bdef8911c3e5",
"8801a91cf6ca24a4df75ea64ad39d9e4eb2f8087"
],
"answer": [
{
"evidence": [
"Finally, we combine our CNN and RNN models using a voting process. For each sentence in the test set, we apply several CNN and RNN models presented in Tables TABREF12 and TABREF14 and predict the class with the most votes. In case of a tie, we pick one of the most frequent classes randomly. The combination achieves an F1 score of 84.9 which is better than the performance of the two NN types alone. It, thus, confirms our assumption that the networks provide complementary information: while the RNN computes a weighted combination of all words in the sentence, the CNN extracts the most informative n-grams for the relation and only considers their resulting activations."
],
"extractive_spans": [
"we apply several CNN and RNN models presented in Tables TABREF12 and TABREF14 and predict the class with the most votes",
"In case of a tie, we pick one of the most frequent classes randomly"
],
"free_form_answer": "",
"highlighted_evidence": [
"Finally, we combine our CNN and RNN models using a voting process. For each sentence in the test set, we apply several CNN and RNN models presented in Tables TABREF12 and TABREF14 and predict the class with the most votes. In case of a tie, we pick one of the most frequent classes randomly."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Finally, we combine our CNN and RNN models using a voting process. For each sentence in the test set, we apply several CNN and RNN models presented in Tables TABREF12 and TABREF14 and predict the class with the most votes. In case of a tie, we pick one of the most frequent classes randomly. The combination achieves an F1 score of 84.9 which is better than the performance of the two NN types alone. It, thus, confirms our assumption that the networks provide complementary information: while the RNN computes a weighted combination of all words in the sentence, the CNN extracts the most informative n-grams for the relation and only considers their resulting activations."
],
"extractive_spans": [],
"free_form_answer": "Among all the classes predicted by several models, for each test sentence, class with most votes are picked. In case of a tie, one of the most frequent classes are picked randomly.",
"highlighted_evidence": [
"Finally, we combine our CNN and RNN models using a voting process. For each sentence in the test set, we apply several CNN and RNN models presented in Tables TABREF12 and TABREF14 and predict the class with the most votes. In case of a tie, we pick one of the most frequent classes randomly."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"d82b94523832b7e32b8473a591fcac2b6b5d7b94"
],
"answer": [
{
"evidence": [
"As a baseline for the RNN models, we apply a uni-directional RNN which predicts the relation after processing the whole sentence. With this model, we achieve an F1 score of 61.2 on the SemEval test set."
],
"extractive_spans": [
"uni-directional RNN"
],
"free_form_answer": "",
"highlighted_evidence": [
"As a baseline for the RNN models, we apply a uni-directional RNN which predicts the relation after processing the whole sentence."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"cd8c64f6a8289f4678de3f7a71181b13221af9dc"
],
"answer": [
{
"evidence": [
"(1) We propose extended middle context, a new context representation for CNNs for relation classification. The extended middle context uses all parts of the sentence (the relation arguments, left of the relation arguments, between the arguments, right of the arguments) and pays special attention to the middle part.",
"One of our contributions is a new input representation especially designed for relation classification. The contexts are split into three disjoint regions based on the two relation arguments: the left context, the middle context and the right context. Since in most cases the middle context contains the most relevant information for the relation, we want to focus on it but not ignore the other regions completely. Hence, we propose to use two contexts: (1) a combination of the left context, the left entity and the middle context; and (2) a combination of the middle context, the right entity and the right context. Due to the repetition of the middle context, we force the network to pay special attention to it. The two contexts are processed by two independent convolutional and max-pooling layers. After pooling, the results are concatenated to form the sentence representation. Figure FIGREF3 depicts this procedure. It shows an examplary sentence: “He had chest pain and <e1>headaches</e1> from <e2>mold</e2> in the bedroom.” If we only considered the middle context “from”, the network might be tempted to predict a relation like Entity-Origin(e1,e2). However, by also taking the left and right context into account, the model can detect the relation Cause-Effect(e2,e1). While this could also be achieved by integrating the whole context into the model, using the whole context can have disadvantages for longer sentences: The max pooling step can easily choose a value from a part of the sentence which is far away from the mention of the relation. With splitting the context into two parts, we reduce this danger. Repeating the middle context increases the chance for the max pooling step to pick a value from the middle context."
],
"extractive_spans": [],
"free_form_answer": "They use two independent convolutional and max-pooling layers on (1) a combination of the left context, the left entity and the middle context; and (2) a combination of the middle context, the right entity and the right context. They concatenated the two results after pooling to get the new context representation.",
"highlighted_evidence": [
"We propose extended middle context, a new context representation for CNNs for relation classification.",
"The contexts are split into three disjoint regions based on the two relation arguments: the left context, the middle context and the right context. ",
"Hence, we propose to use two contexts: (1) a combination of the left context, the left entity and the middle context; and (2) a combination of the middle context, the right entity and the right context.",
"The two contexts are processed by two independent convolutional and max-pooling layers. After pooling, the results are concatenated to form the sentence representation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"By how much does their best model outperform the state-of-the-art?",
"Which dataset do they train their models on?",
"How does their simple voting scheme work?",
"Which variant of the recurrent neural network do they use?",
"How do they obtain the new context represetation?"
],
"question_id": [
"6cd8bad8a031ce6d802ded90f9754088e0c8d653",
"30eacb4595014c9c0e5ee9669103d003cfdfe1e5",
"0f7867f888109b9e000ef68965df4dde2511a55f",
"e2e977d7222654ee8d983fd8ba63b930e9a5a691",
"0cfe0e33fbb100751fc0916001a5a19498ae8cb5"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: CNN with extended contexts",
"Figure 2: Connectionist bi-directional RNN",
"Table 1: F1 score of CNN and its components, * indicates statisticial significance compared to the result in the line above (z-test, p < 0.05)",
"Table 2: F1 score of RNN and its components, * indicates statisticial significance compared to the result in the line above (z-test, p < 0.05)",
"Table 3: State-of-the-art results for relation classification"
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png"
]
} | [
"By how much does their best model outperform the state-of-the-art?",
"How does their simple voting scheme work?",
"How do they obtain the new context represetation?"
] | [
[
"1605.07333-Comparison with State of the Art-0",
"1605.07333-5-Table3-1.png"
],
[
"1605.07333-Combination of CNNs and RNNs-0"
],
[
"1605.07333-Introduction-2",
"1605.07333-Input: Extended Middle Context-0"
]
] | [
"Best proposed model achieves F1 score of 84.9 compared to best previous result of 84.1.",
"Among all the classes predicted by several models, for each test sentence, class with most votes are picked. In case of a tie, one of the most frequent classes are picked randomly.",
"They use two independent convolutional and max-pooling layers on (1) a combination of the left context, the left entity and the middle context; and (2) a combination of the middle context, the right entity and the right context. They concatenated the two results after pooling to get the new context representation."
] | 208 |
2003.08385 | X-Stance: A Multilingual Multi-Target Dataset for Stance Detection | We extract a large-scale stance detection dataset from comments written by candidates of elections in Switzerland. The dataset consists of German, French and Italian text, allowing for a cross-lingual evaluation of stance detection. It contains 67 000 comments on more than 150 political issues (targets). Unlike stance detection models that have specific target issues, we use the dataset to train a single model on all the issues. To make learning across targets possible, we prepend to each instance a natural question that represents the target (e.g."Do you support X?"). Baseline results from multilingual BERT show that zero-shot cross-lingual and cross-target transfer of stance detection is moderately successful with this approach. | {
"paragraphs": [
[
"In recent years many datasets have been created for the task of automated stance detection, advancing natural language understanding systems for political science, opinion research and other application areas. Typically, such benchmarks BIBREF0 are composed of short pieces of text commenting on politicians or public issues and are manually annotated with their stance towards a target entity (e.g. Climate Change, or Trump). However, they are limited in scope on multiple levels BIBREF1.",
"First of all, it is questionable how well current stance detection methods perform in a cross-lingual setting, as the multilingual datasets available today are relatively small, and specific to a single target BIBREF2, BIBREF3. Furthermore, specific models tend to be developed for each single target or pair of targets BIBREF4. Concerns have been raised that cross-target performance is often considerably lower than fully supervised performance BIBREF1.",
"In this paper we propose a much larger dataset that combines multilinguality and a multitude of topics and targets. x-stance comprises more than 150 questions concerning Swiss politics and more than 67k answers given in the last decade by candidates running for political office in Switzerland.",
"Questions are available in four languages: English, Swiss Standard German, French, and Italian. The language of a comment depends on the candidate's region of origin.",
"We have extracted the data from the voting advice application Smartvote. On that platform, candidates respond to questions mainly in categorical form (yes / rather yes / rather no / no). They can also submit a free-text comment in order to justify, explain or differentiate their categorical answer. An example is given in Figure FIGREF1.",
"We transform the dataset into a stance detection task by interpreting the question as a natural-language representation of the target, and the commentary as the input to be classified.",
"The dataset is split into a multilingual training set and into multiple test sets to evaluate zero-shot cross-lingual and cross-target transfer. To provide a baseline, we fine-tune a multilingual Bert model BIBREF5 on x-stance. We show that the baseline accuracy is comparable to previous stance detection benchmarks while leaving ample room for improvement. In addition, multilingual Bert can generalize to a degree both cross-lingually and in a cross-target setting.",
"We have made the dataset and the code for reproducing the baseline model publicly available."
],
[
"In the context of the IberEval shared tasks, two related multilingual datasets have been created BIBREF2, BIBREF3. Both are a collection of annotated Spanish and Catalan tweets. Crucially, the tweets in both languages focus on the same issue (Catalan independence); given this fact they are the first truly multilingual stance detection datasets known to us.",
"With regard to the languages covered by x-stance, only monolingual datasets seem to be available. For French, a collection of tweets on French presidential candidates has been annotated with stance BIBREF6. Similarly, two datasets of Italian tweets on the occasion of the 2016 constitutional referendum have been created BIBREF7, BIBREF6. For German, there is no known stance detection dataset, but an aspect-based sentiment dataset has been created for a GermEval 2017 shared task BIBREF8."
],
[
"The SemEval-2016 task on detecting stance in tweets BIBREF9 offers data concerning multiple targets (Atheism, Climate Change, Feminism, Hillary Clinton, and Abortion). In the supervised subtask A, participants tended to develop a target-specific model for each of those targets. In subtask B cross-target transfer to the target “Donald Trump” was tested, for which no annotated training data were provided. While this required the development of more universal models, their performance was generally much lower.",
"BIBREF4 introduced a multi-target stance dataset which provides two targets per instance. For example, a model designed in this framework is supposed to simultaneously classify a tweet with regard to Clinton and with regard to Trump. While in theory the framework allows for more than two targets, it is still restricted to a finite and clearly defined set of targets. It focuses on modeling the dependencies of multiple targets within the same text sample, while our approach focuses on learning stance detection from many samples with many different targets."
],
[
"In a target-specific setting, BIBREF10 perform a systematic evaluation of stance detection approaches. They also evaluate Bert BIBREF5 and find that it consistently outperforms previous approaches.",
"However, they only experimented with a single-segment encoding of the input, preventing cross-target transfer of the model. BIBREF11 propose a conditional encoding approach to encode both the target and the tweet as sequences. They use a bidirectional LSTM to condition the encoding of the tweets on the encoding of the target, and then apply a nonlinear projection on the conditionally encoded tweet. This allows them to train a model that can generalize to previously unseen targets."
],
[
"The input provided by x-stance is two-fold: (A) a natural language question concerning a political issue; (B) a natural language commentary on a specific stance towards the question.",
"The label to be predicted is either `favor' or `against`. This corresponds to a standard established by BIBREF0. However, x-stance differs from that dataset in that it lacks a `neither' class; all comments refer to either a `favor' or an `against` position. The task posed by x-stance is thus a binary classification task.",
"As an evaluation metric we report the macro-average of the F1-score for `favor' and the F1-score for `against', similar to BIBREF9. We use this metric mainly to strengthen comparability with the previous benchmarks."
],
[
"We downloaded questions and answers via the Smartvote API. The downloaded data cover 175 communal, cantonal and national elections between 2011 and 2020.",
"All candidates in an election who participate in Smartvote are asked the same set of questions, but depending on the locale they see translated versions of the questions. They can answer each question with either `yes', `rather yes', `rather no', or `no'. They can supplement each answer with a comment of at most 500 characters.",
"The questions asked on Smartvote have been edited by a team of political scientists. They are intended to cover a broad range of political issues relevant at the time of the election. A detailed documentation of the design of Smartvote and the editing process of the questions is provided by BIBREF12."
],
[
"We merged the two labels on each pole into a single label: `yes' and `rather yes' were combined into `favor'; `rather no', or `no' into `against`. This improves the consistency of the data and the comparability to previous stance detection datasets.",
"We did not further preprocess the text of the comments."
],
[
"As the API does not provide the language of comments, we employed a language identifier to automatically annotate this information. We used the langdetect library BIBREF13. For each responder we classified all the comments jointly, assuming that responders did not switch code during the answering of the questionnaire.",
"We applied the identifier in a two-step approach. In the first run we allowed the identifier to output all 55 languages that it supports out of the box, plus Romansh, the fourth official language in Switzerland. We found that no Romansh comments were detected and that all unexpected outputs were misclassifications of German, French or Italian comments. We further concluded that little or no Swiss German comments are in the dataset: If they were, some of them would have manifested themselves in the form of misclassifications (e.g. as Dutch).",
"In the second run, drawing from these conclusions, we restricted the identifier's output to English, French, German and Italian."
],
[
"We pre-filtered the questions and answers to improve the quality of the dataset. To keep the domain of the data surveyable, we set a focus on national-level questions. Therefore, all questions and corresponding answers pertaining to national elections were included.",
"In the context of communal and cantonal elections, candidates have answered both local questions and a subset of the national questions. Of those elections, we only considered answers to the questions that also had been asked in a national election. Furthermore, they were only used to augment the training set while the validation and test sets were restricted to answers from national elections.",
"We discarded the less than 20 comments classified as English. Furthermore, instances that met any of the following conditions were filtered from the dataset:",
"Question is not a closed question or does not address a clearly defined political issue.",
"No comment was submitted by the candidate or the comment is shorter than 50 characters.",
"Comment starts with “but” or a similar indicator that the comment is not a self-contained statement.",
"Comment contains a URL.",
"In total, a fifth of the original comments were filtered out."
],
[
"The questions have been organized by the Smartvote editors into categories (such as “Economy”). We further consolidated the pre-defined categories into 12 broad topics (Table TABREF7)."
],
[
"The dataset is shared under a CC BY-NC 4.0 license. Copyright remains with www.smartvote.ch.",
"Given the sensitive nature of the data, we increase the anonymity of the data by hashing the respondents' IDs. No personal attributes of the respondents, such as their party affiliation, are included in the dataset. We provide a data statement BIBREF15 in Appendix SECREF8."
],
[
"We held out the topics “Healthcare” and “Political System” from the training data and created a separate cross-topic test set that contains the questions and answers related to those topics.",
"Furthermore, in order to test cross-question generalization performance within previously seen topics, we manually selected 16 held-out questions that are distributed over the remaining 10 topics. We selected the held-out questions manually because we wanted to make sure that they are truly unseen and that no paraphrases of the questions are found in the training set.",
"We designated Italian as a test-only language, since relatively few comments have been written in Italian. From the remaining German and French data we randomly selected a percentage of respondents as validation or as test respondents.",
"As a result we received one training set, one validation set and four test sets. The sizes of the sets are listed in Table TABREF8. We did not consider test sets that are cross-lingual and cross-target at the same time, as they would have been too small to yield significant results."
],
[
"Some observations regarding the composition of x-stance can be made."
],
[
"Figure FIGREF23 visualizes the proportion of `favor' and `against` stances for each target in the dataset. The ratio differs between questions but is relatively equally distributed across the topics. In particular, the questions belonging to the held-out topics (with a `favor' ratio of 49.4%) have a similar class distribution as the questions within other topics (with a `favor' ratio of 50.0%)."
],
[
"Not every question is unique; some questions are paraphrases describing the same political issue. For example, in the 2015 election, the candidates were asked: “Should the consumption of cannabis as well as its possession for personal use be legalised?” Four years later they were asked: “Should cannabis use be legalized?” However, we do not see any need to consolidate those duplicates because they contribute to the diversity of the training data.",
"We further observe that while some questions in the dataset are quite short, some questions are rather convoluted. For example, a typical long question reads:",
"Some 1% of direct payments to Swiss agriculture currently go to organic farming operations. Should this proportion be increased at the expense of standard farming operations as part of Switzerland's 2014-2017 agricultural policy?",
"Such longer questions might be more challenging to process semantically."
],
[
"The x-stance dataset has more German samples than French samples. The language ratio of about 3:1 is consistent across all training and test sets. Given the two languages it is possible to either train two monolingual models or to train a single model in a multi-source setup BIBREF16. We choose a multi-source baseline because M-Bert is known to benefit from multilingual training data both in a supervised and in a cross-lingual scenario BIBREF17."
],
[
"We evaluate two types of baselines to obtain an impression of the difficulty of the task."
],
[
"The first pair of baselines uses the most frequent class in the training set for prediction. Specifically, the global majority class baseline predicts the most frequent class across all training targets while the target-wise majority class baseline predicts the class that is most frequent for a given target question. The latter can only be applied to the supervised test set."
],
[
"Secondly, we fine-tune multilingual Bert (M-Bert) on the task BIBREF5 which has been pretrained jointly in 104 languages and has established itself as a state of the art for various multilingual tasks BIBREF18, BIBREF19. Within the field of stance detection, Bert can outperform both feature-based and other neural approaches in a monolingual English setting BIBREF10."
],
[
"In the context of Bert we interpret the x-stance task as sequence pair classification inspired by natural language inference tasks BIBREF20. We follow the procedure outlined by BIBREF5 for such tasks. We designate the question as segment A and the comment as segment B. The two segments are separated with the special token [SEP], and the special token [CLS] is prepended to the sequence. The final hidden state corresponding to [CLS] is then classified by a linear layer.",
"We fine-tune the full model with a cross-entropy loss, using the AllenNLP library BIBREF21 as a basis for our implementation."
],
[
"We upsampled the `favor' class so that the two classes are balanced when summing over all questions and topics. A maximum sequence length of 512 subwords and a batch size of 16 was chosen for all training runs. We then performed a grid search within the following range of hyperparameters based on the validation accuracy:",
"Learning rate: 5e-5, 3e-5, 2e-5",
"Number of epochs: 3, 4",
"The grid search was repeated independently for every variant that we tested. Furthermore, the standard recommendations for fine-tuning Bert were used: Adam with ${\\beta }_1=0.9$ and ${\\beta }_2=0.999$; an L2 weight decay of $0.01$; a learning rate warmup over the first 10% of the steps; and a linear decay of the learning rate. A dropout probability of 0.1 was set on all layers."
],
[
"Table TABREF36 shows the results for the cross-lingual setting. M-Bert performs consistently better than the majority class baselines. Even the zero-short performance in Italian, while significantly lower than the supervised scores, is much better than the target-wise majority class baseline.",
"Results for the cross-target setting are given in Table TABREF37. Similar to the cross-lingual setting, M-Bert performs worse in a cross-target setting but easily surpasses the majority class baselines. Furthermore, the cross-question score of M-Bert is slightly lower than the cross-topic score."
],
[
"The default setup preserves horizontal language consistency in that the language of the questions always matches the language of the comments. For example, the Italian test instances are combined with the Italian version of the questions, even though during training the model has only ever seen the German and French versions of the questions.",
"An alternative concept is vertical language consistency, whereby the questions are consistently presented in one language, regardless of the comment. To test whether horizontal or vertical consistency is more helpful, we train and evaluate M-Bert on a dataset variant where all questions are in their English version. We chose English as a lingua franca because it had the largest share of data during the pretraining of M-Bert.",
"The results are shown in Table TABREF39. While the effect is negligible in most settings, the cross-lingual performance clearly increases when all questions are given in English."
],
[
"In order to rule out that only the questions or only the comments are necessary to optimally solve the task, we conduct some additional experiments:",
"Only use a single segment containing the comment, removing the questions from the training and test data (missing questions).",
"Only use the question and remove the comment (missing comments).",
"In both cases the performance decreases across all evaluation settings (Table TABREF39). The loss in performance is much higher when comments are missing, indicating that the comments contain the most important information about stance. As can be expected, the score achieved without comments is only slightly different from the target-wise majority class baseline.",
"But there is also a loss in performance when the questions are missing, which underlines the importance of pairing both pieces of text. The effect of missing questions is especially strong in the supervised and cross-lingual settings. To illustrate this, we provide in Table TABREF48 some examples of comments that occur with multiple different targets in the training set. Those examples can explain why the target can be essential for disambiguating a stance detection problem. On the other hand, the effect of omitting the questions is less pronounced in the cross-target settings.",
"The above single-segment experiments tell us that both the comment and the question provide crucial information. But it is possible that the M-Bert model, even though trained on both segments, mainly looks at a single segment at test time. To rule this out, we probe the model with randomized data at test time:",
"Test the model on versions of the test sets where the comments remain in place but the questions are shuffled randomly (random questions). We make sure that the random questions come from the same test set and language as the original questions.",
"Keep the questions in place and randomize the comments (random comments). Again we shuffle the comments only within test set boundaries.",
"The results in Table TABREF39 show that the performance of the model decreases in both cases, confirming that it learns to take into account both segments."
],
[
"Finally we test whether the target really needs to be represented by natural language (e.g. “Do you support X?”). Namely, an alternative is to represent the target with a trainable embedding instead of a question.",
"In order to fit target embeddings smoothly into our architecture, we represent each target type with a different reserved symbol from the M-Bert vocabulary. Segment A is then set to this symbol instead of a natural language question.",
"The results for this experiment are listed in the bottom row of Table TABREF39. An M-Bert model that learns target embeddings instead of encoding a question performs clearly worse in the supervised and cross-lingual settings. From this we conclude that spelled-out natural language questions provide important linguistic detail that can help in stance detection."
],
[
"The baseline experiments confirm that M-Bert can achieve a reasonable accuracy on x-stance.",
"To put the supervised score into context we list scores that variants of Bert have achieved on other stance detection datasets in Table TABREF46. It seems that the supervised part of x-stance has a similar difficulty as the SemEval-2016 BIBREF0 or MPCHI BIBREF22 datasets on which Bert has previously been evaluated.",
"On the other hand, in the cross-lingual and cross-target settings, the mean score drops by 6–8 percentage points compared to the supervised setting; while zero-shot transfer is possible to a degree, it can still be improved.",
"The additional experiments (Table TABREF39) validate the results and show that the sequence-pair classification approach to stance detection is justified.",
"It is interesting to see what errors the M-Bert model makes. Table TABREF47 presents instances where it predicts the wrong label with a high confidence. These examples indicate that many comments express their stance only on a very implicit level, and thus hint at a potential weakness of the dataset. Because on the voting advice platform the label is explicitly shown to readers in addition to the comments, the comments do not need to express the stance explicitly. Manual annotation could eliminate very implicit samples in a future version of the dataset. However, the sheer size and breadth of the dataset could not realistically be achieved with manual annotation, and, in our view, largely compensates for the implicitness of the texts."
],
[
"We have presented a new dataset for political stance detection called x-stance. The dataset extends over a broad range of topics and issues regarding national Swiss politics. This diversity of topics opens up an opportunity to further study multi-target learning. Moreover, being partly Swiss Standard German, partly French and Italian, the dataset promotes a multilingual approach to stance detection.",
"By compiling formal commentary that politicians have written on political questions, we add a new text genre to the field of stance detection. We also propose a question–answer format that allows us to condition stance detection models on a target naturally.",
"Our baseline results with multilingual Bert show that the model has some capability to perform zero-shot transfer to unseen languages and to unseen targets (both within a topic and to unseen topics). However, there is some gap in performance that future work could address. We expect that the x-stance dataset could furthermore be a valuable resource for fields such as argument mining, argument search or topic classification."
],
[
"This work was funded by the Swiss National Science Foundation (project MUTAMUR; no. 176727). We would like to thank Isabelle Augenstein for helpful feedback."
],
[
"In order to study the automatic detection of stances on political issues, questions and candidate responses on the voting advice application smartvote.ch were downloaded. Mainly data pertaining to national-level issues were included to reduce variability."
],
[
"The training set consists of questions and answers in Swiss Standard German and Swiss French (74.1% de-CH; 25.9% fr-CH). The test sets also contain questions and answers in Swiss Italian (67.1% de-CH; 24.7% fr-CH; 8.2% it-CH). The questions have also been translated into English."
],
[
"Candidates for communal, cantonal or national elections in Switzerland who have filled out an online questionnaire.",
"Age: 18 or older – mixed.",
"Gender: Unknown – mixed.",
"Race/ethnicity: Unknown – mixed.",
"Native language: Unknown – mixed.",
"Socioeconomic status: Unknown – mixed.",
"Different speakers represented: 7581.",
"Presence of disordered speech: Unknown."
],
[
"The questions were edited and translated by political scientists for a public voting advice website.",
"The answers were written between 2011 and 2020 by the users of the website."
],
[
"Questions, answers, arguments and comments regarding political issues."
]
],
"section_name": [
"Introduction",
"Related Work ::: Multilingual Stance Detection",
"Related Work ::: Multi-Target Stance Detection",
"Related Work ::: Representation Learning for Stance Detection",
"The x-stance Dataset ::: Task Definition",
"The x-stance Dataset ::: Data Collection ::: Provenance",
"The x-stance Dataset ::: Data Collection ::: Preprocessing",
"The x-stance Dataset ::: Data Collection ::: Language Identification",
"The x-stance Dataset ::: Data Collection ::: Filtering",
"The x-stance Dataset ::: Data Collection ::: Topics",
"The x-stance Dataset ::: Data Collection ::: Compliance",
"The x-stance Dataset ::: Data Split",
"The x-stance Dataset ::: Analysis",
"The x-stance Dataset ::: Analysis ::: Class Distribution",
"The x-stance Dataset ::: Analysis ::: Linguistic Properties",
"The x-stance Dataset ::: Analysis ::: Languages",
"Baseline Experiments",
"Baseline Experiments ::: Majority Class Baselines",
"Baseline Experiments ::: Multilingual BERT Baseline",
"Baseline Experiments ::: Multilingual BERT Baseline ::: Architecture",
"Baseline Experiments ::: Multilingual BERT Baseline ::: Training",
"Baseline Experiments ::: Multilingual BERT Baseline ::: Results",
"Baseline Experiments ::: How Important is Consistent Language?",
"Baseline Experiments ::: How Important are the Segments?",
"Baseline Experiments ::: How Important are Spelled-Out Targets?",
"Discussion",
"Conclusion",
"Acknowledgments",
"Data Statement ::: Curation rationale",
"Data Statement ::: Language variety",
"Data Statement ::: Speaker demographic (answers)",
"Data Statement ::: Speech situation",
"Data Statement ::: Text characteristics"
]
} | {
"answers": [
{
"annotation_id": [
"3d18242756d84116b9b3c42bbc177c5e5d90831c",
"53ccedf6d49f660aef3570c9633929102cf37dc9",
"922a437305d206913df9146c010ac0fb7b08b617"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Baseline scores in the cross-lingual setting. No Italian samples were seen during training, making this a case of zero-shot cross-lingual transfer. The scores are reported as the macro-average of the F1scores for ‘favor’ and for ‘against’."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Baseline scores in the cross-lingual setting. No Italian samples were seen during training, making this a case of zero-shot cross-lingual transfer. The scores are reported as the macro-average of the F1scores for ‘favor’ and for ‘against’."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4: Baseline scores in the cross-target setting. For each test set we separately report a German and a French score, as well as their harmonic mean."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: Baseline scores in the cross-target setting. For each test set we separately report a German and a French score, as well as their harmonic mean."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"2fc2cebbb151f75fb2d00fd9fd37d829bff306fa",
"3f5b2ecb4b40eb3a95aa35c5bedbce66c0d79488"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 6: Performance of BERT-like models on different supervised stance detection benchmarks."
],
"extractive_spans": [],
"free_form_answer": "M-Bert had 76.6 F1 macro score.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: Performance of BERT-like models on different supervised stance detection benchmarks."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 6: Performance of BERT-like models on different supervised stance detection benchmarks."
],
"extractive_spans": [],
"free_form_answer": "75.1% and 75.6% accuracy",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: Performance of BERT-like models on different supervised stance detection benchmarks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
]
},
{
"annotation_id": [
"fc66343ef12a4e422e24eb6c90a1362542aa7bb3"
],
"answer": [
{
"evidence": [
"An alternative concept is vertical language consistency, whereby the questions are consistently presented in one language, regardless of the comment. To test whether horizontal or vertical consistency is more helpful, we train and evaluate M-Bert on a dataset variant where all questions are in their English version. We chose English as a lingua franca because it had the largest share of data during the pretraining of M-Bert."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To test whether horizontal or vertical consistency is more helpful, we train and evaluate M-Bert on a dataset variant where all questions are in their English version."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"0893909bc79a5207a33614645c31d9d9ae4e0bdc"
],
"answer": [
{
"evidence": [
"To put the supervised score into context we list scores that variants of Bert have achieved on other stance detection datasets in Table TABREF46. It seems that the supervised part of x-stance has a similar difficulty as the SemEval-2016 BIBREF0 or MPCHI BIBREF22 datasets on which Bert has previously been evaluated.",
"FLOAT SELECTED: Table 6: Performance of BERT-like models on different supervised stance detection benchmarks."
],
"extractive_spans": [],
"free_form_answer": "BERT had 76.6 F1 macro score on x-stance dataset.",
"highlighted_evidence": [
"To put the supervised score into context we list scores that variants of Bert have achieved on other stance detection datasets in Table TABREF46.",
"FLOAT SELECTED: Table 6: Performance of BERT-like models on different supervised stance detection benchmarks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3abdeb3cc23d2bd39614357c0c243d5583b25132"
],
"answer": [
{
"evidence": [
"All candidates in an election who participate in Smartvote are asked the same set of questions, but depending on the locale they see translated versions of the questions. They can answer each question with either `yes', `rather yes', `rather no', or `no'. They can supplement each answer with a comment of at most 500 characters."
],
"extractive_spans": [
"answer each question with either `yes', `rather yes', `rather no', or `no'.",
"can supplement each answer with a comment of at most 500 characters"
],
"free_form_answer": "",
"highlighted_evidence": [
"They can answer each question with either `yes', `rather yes', `rather no', or `no'. They can supplement each answer with a comment of at most 500 characters."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Does the paper report the performance of the model for each individual language?",
"What is the performance of the baseline?",
"Did they pefrorm any cross-lingual vs single language evaluation?",
"What was the performance of multilingual BERT?",
"What annotations are present in dataset?"
],
"question_id": [
"35b3ce3a7499070e9b280f52e2cb0c29b0745380",
"71ba1b09bb03f5977d790d91702481cc406b3767",
"612c3675b6c55b60ae6d24265ed8e20f62cb117e",
"bd40f33452da7711b65faaa248aca359b27fddb6",
"787c4d4628eac00dbceb1c96020bff0090edca46"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"Italian",
"Italian",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Example of a question and two answers in the X-stance dataset. The answers were submitted by electoral candidates on a voting advice website. The author of the comment in German declared to be in favor of the issue and added the comment as an explanation. The comment in French has been written by another candidate to explain why a negative stance has been taken towards the issue. While X-stance contains hundreds of answers to this question, we have picked the two comments as examples due to their brevity.",
"Table 1: Number of questions and answers per topic.",
"Table 2: Number of answer instances in the training, validation and test sets. The upper left corner represents a multilingually supervised task, where training, validation and test data are from exactly the same domain. The topto-bottom axis gives rise to a cross-lingual transfer task, where a model trained on German and French is evaluated on Italian answers to the same questions. The left-to-right axis represents a continuous shift of domain: In the middle column, the model is tested on previously unseen questions that belong to the same topics as seen during training. In the right column the model encounters unseen answers to unseen questions within an unseen topic. The two test sets in parentheses are too small for a significant evaluation.",
"Figure 2: Proportion of ‘favor’ labels per question, grouped by topic. While the proportion of favorable answers varies from question to question, it is balanced overall.",
"Table 3: Baseline scores in the cross-lingual setting. No Italian samples were seen during training, making this a case of zero-shot cross-lingual transfer. The scores are reported as the macro-average of the F1scores for ‘favor’ and for ‘against’.",
"Table 4: Baseline scores in the cross-target setting. For each test set we separately report a German and a French score, as well as their harmonic mean.",
"Table 5: Results for additional experiments. The cross-lingual score is the F1-score on the Italian test set. For the supervised, cross-question and cross-topic settings we report the harmonic mean of the German and French scores.",
"Table 6: Performance of BERT-like models on different supervised stance detection benchmarks."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"5-Figure2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"8-Table6-1.png"
]
} | [
"What is the performance of the baseline?",
"What was the performance of multilingual BERT?"
] | [
[
"2003.08385-8-Table6-1.png"
],
[
"2003.08385-8-Table6-1.png",
"2003.08385-Discussion-1"
]
] | [
"75.1% and 75.6% accuracy",
"BERT had 76.6 F1 macro score on x-stance dataset."
] | 209 |
1901.10133 | Structuring an unordered text document | Segmenting an unordered text document into different sections is a very useful task in many text processing applications like multiple document summarization, question answering, etc. This paper proposes structuring of an unordered text document based on the keywords in the document. We test our approach on Wikipedia documents using both statistical and predictive methods such as the TextRank algorithm and Google's USE (Universal Sentence Encoder). From our experimental results, we show that the proposed model can effectively structure an unordered document into sections. | {
"paragraphs": [
[
"To structure an unordered document is an essential task in many applications. It is a post-requisite for applications like multiple document extractive text summarization where we have to present a summary of multiple documents. It is a prerequisite for applications like question answering from multiple documents where we have to present an answer by processing multiple documents. In this paper, we address the task of segmenting an unordered text document into different sections. The input document/summary that may have unordered sentences is processed so that it will have sentences clustered together. Clustering is based on the similarity with the respective keyword as well as with the sentences belonging to the same cluster. Keywords are identified and clusters are formed for each keyword.",
"We use TextRank algorithm BIBREF0 to extract the keywords from a text document. TextRank is a graph-based ranking algorithm which decides the importance of a vertex within a graph by considering the global information recursively computed from the entire graph rather than focusing on local vertex specific information. The model uses knowledge acquired from the entire text to extract the keywords. A cluster is generated for every keyword.",
"While generating clusters, the similarity between a sentence and a topic/keyword is calculated using the cosine similarity of embeddings generated using Google's USE (Universal Sentence Encoder) BIBREF1 . USE is claimed to have better performance of transfer learning when used with sentence-level embeddings as compared to word-level embeddings alone. This model is claimed to have better performance even if there is less task-specific training data available. We observed that the quality of clusters/section is better if the similarity of a sentence with the keyword is considered along with the similarity of the sentence with the sentences already available in the respective section.",
"To test our approach, we jumble the ordering of sentences in a document, process the unordered document and compare the similarity of the output document with the original document."
],
[
"Several models have been performed in the past to retrieve sentences of a document belonging to a particular topic BIBREF2 . Given a topic, retrieving sentences that may belong to that topic should be considered as a different task than what we aim in this paper. A graph based approach for extracting information relevant to a query is presented in BIBREF3 , where subgraphs are built using the relatedness of the sentences to the query. An incremental integrated graph to represent the sentences in a collection of documents is presented in BIBREF4 , BIBREF5 . Sentences from the documents are merged into a master sequence to improve coherence and flow. The same ordering is used for sequencing the sentences in the extracted summary. Ordering of sentences in a document is discussed in BIBREF6 .",
"In this paper, we aim to generate the sections/clusters from an unordered document. To the best of our knowledge, this is a first attempt to address this problem formally."
],
[
"Our methodology is described in the Figure 1 . The process starts by taking an unordered document as an input. The next step is to extract the keywords from the input document using TextRank algorithm BIBREF0 and store them in a list $K$ . The keywords stored in $K$ act as centroids for the clusters. Note that, the quality of keywords extracted will have a bearing on the final results. In this paper, we present a model that can be used for structuring an unstructured document. In the process, we use a popular keyword extraction algorithm. Our model is not bound to TextRank, and if a better keyword extraction algorithm is available, it can replace TextRank.",
"The next step is to find the most relevant sentence for each keyword in $K$ . The most relevant sentence is mapped to the keyword and assigned in the respective cluster. This similarity between the keyword and the sentence is calculated by the cosine similarity of embeddings generated from Google's USE. Now, we have a list of keywords $K$ , and a sentence mapped to each keyword. The next step is to map the remaining sentences. In the next step, we go through all the sentences in the document that have not been mapped yet and find the relevant keywords that can be mapped with them. We do this by the following procedure:",
" $S_1(x,y)$ is the similarity between the current sentence $x$ and the keyword $y$ . $S_2(x,y)$ is the maximum similarity between the current sentence $x$ and the sentences that are already mapped with the keyword $y$ . If $y$ has three sentences mapped to it, then the similarity between $x$ and the sentences mapped to $y$ are computed, and the maximum similarity among the three is assigned to $S_2(x,y)$ . The overall similarity $x$0 is calculated as: ",
"$$S(x,y)= t*S_1(x,y) + (1-t)*S_2(x,y)$$ (Eq. 1) ",
"We map every other sentence $x$ to the keyword with which they have maximum similarity $Max_y (S(x,y))$ . Later, we cluster the keywords along with the sentences mapped to them. The value of $t$ signifies the importance to be given to the keyword and its associated sentences respectively. In our experiments, we empirically fix the value of $t$ to $0.25$ ."
],
[
"To evaluate our algorithm, we propose two similarity metrics, $Sim1$ and $Sim2$ . These metrics compute the similarity of each section of the original document with all the sections/clusters (keyword and the sentences mapped to it) of the output document and assign the maximum similarity. $Sim1$ between an input section and an output section is calculated as the number of sentences of the input section that are present in the output section divided by the total number of sentences in the input section. To calculate the final similarity (similarity of the entire output document) we take the weighted mean of similarity calculated corresponding to each input section. $Sim2$ between an input section and an output section is computed as the number of sentences of an input section that are present in an output section divided by the sum of sentences in the input and output sections. The final similarity is computed in a similar manner."
],
[
"For our experiments, we prepared five sets of documents. Each set has 100 wiki documents (randomly chosen). Each document is restructured randomly (sentences are rearranged randomly). This restructured document is the input to our model and the output document is compared against the original input document.",
"Also, we compare our results with the baseline being the results when we consider only the similarity between sentences and keywords. The results are shown in Table 1 . Here, both $Sim1$ and $Sim2$ are the mean of similarities of the entire Set. $BaseSim1$ and $BaseSim1$ are computed similar to $Sim1$ and $Sim2$ respectively but with the $t$ value as 1 in Equation 1 . It is evident that the results are better if both similarities ( $S_1(x,y)$ and $S_2(x,y)$ ) are considered."
],
[
"We proposed an efficient model to structure an unordered document. We evaluated our model against the baseline and found that our proposed model has a significant improvement. We observed that while ordering an unordered document, the initial sentences associated with a keyword/topic play a significant role."
]
],
"section_name": [
"Introduction",
"Related Work",
"Proposed model",
"Metrics to evaluate our algorithm",
"Experimental setup and Results",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"f334417f71605184e010a9ff6fbce45da123f003"
],
"answer": [
{
"evidence": [
"To structure an unordered document is an essential task in many applications. It is a post-requisite for applications like multiple document extractive text summarization where we have to present a summary of multiple documents. It is a prerequisite for applications like question answering from multiple documents where we have to present an answer by processing multiple documents. In this paper, we address the task of segmenting an unordered text document into different sections. The input document/summary that may have unordered sentences is processed so that it will have sentences clustered together. Clustering is based on the similarity with the respective keyword as well as with the sentences belonging to the same cluster. Keywords are identified and clusters are formed for each keyword.",
"To test our approach, we jumble the ordering of sentences in a document, process the unordered document and compare the similarity of the output document with the original document."
],
"extractive_spans": [],
"free_form_answer": "A unordered text document is one where sentences in the document are disordered or jumbled. It doesn't appear that unordered text documents appear in corpora, but rather are introduced as part of processing pipeline.",
"highlighted_evidence": [
"The input document/summary that may have unordered sentences is processed so that it will have sentences clustered together.",
"To test our approach, we jumble the ordering of sentences in a document, process the unordered document and compare the similarity of the output document with the original document.",
"To structure an unordered document is an essential task in many applications. It is a post-requisite for applications like multiple document extractive text summarization where we have to present a summary of multiple documents. It is a prerequisite for applications like question answering from multiple documents where we have to present an answer by processing multiple documents."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
},
{
"annotation_id": [
"ccd9fcfc902dda82ed7b28bff7b62811e3967794"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Fig. 1. Proposed methodology"
],
"extractive_spans": [
"Our methodology is described in the Figure 1 "
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Fig. 1. Proposed methodology"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
},
{
"annotation_id": [
"0a5f4b88d1027de43343cab32408966e6cc1b603",
"5023831ba72f571399617c90851ca532274bad87"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"For our experiments, we prepared five sets of documents. Each set has 100 wiki documents (randomly chosen). Each document is restructured randomly (sentences are rearranged randomly). This restructured document is the input to our model and the output document is compared against the original input document."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For our experiments, we prepared five sets of documents. Each set has 100 wiki documents (randomly chosen). Each document is restructured randomly (sentences are rearranged randomly). This restructured document is the input to our model and the output document is compared against the original input document."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
},
{
"annotation_id": [
"47b680525ebe365a8ff53646974b5170048fd123",
"6a950ce5f7d143dc73830502b9740d77b442fdd3"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
},
{
"annotation_id": [
"53903ee60627effb0f5440d1c6d5cc28e437392c",
"b4a8c50208bc8f23eb02ef6a87bb6445cb40641f"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is an unordered text document, do these arise in real-world corpora?",
"What kind of model do they use?",
"Do they release a data set?",
"Do they release code?",
"Which languages do they evaluate on?"
],
"question_id": [
"559920ebe19e99e43418c2f0455a0ffdc8edaaa2",
"43f56301c5d2f50b6449b582652f2351cbe90e70",
"68cd8433a75eee7d100dbbfedebbf53873a21720",
"0a01341ddca3fe4c8727be7cf5841091c6ca3d0b",
"a8c862d8c5da3f060eedf3395a3d0a97fd8a12d8"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"wikipedia",
"wikipedia",
"wikipedia",
"wikipedia",
"wikipedia"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"TABLE 1 Results",
"Fig. 1. Proposed methodology"
],
"file": [
"2-Table1-1.png",
"2-Figure1-1.png"
]
} | [
"What is an unordered text document, do these arise in real-world corpora?"
] | [
[
"1901.10133-Introduction-0",
"1901.10133-Introduction-3"
]
] | [
"A unordered text document is one where sentences in the document are disordered or jumbled. It doesn't appear that unordered text documents appear in corpora, but rather are introduced as part of processing pipeline."
] | 210 |
1911.00841 | Question Answering for Privacy Policies: Combining Computational and Legal Perspectives | Privacy policies are long and complex documents that are difficult for users to read and understand, and yet, they have legal effects on how user data is collected, managed and used. Ideally, we would like to empower users to inform themselves about issues that matter to them, and enable them to selectively explore those issues. We present PrivacyQA, a corpus consisting of 1750 questions about the privacy policies of mobile applications, and over 3500 expert annotations of relevant answers. We observe that a strong neural baseline underperforms human performance by almost 0.3 F1 on PrivacyQA, suggesting considerable room for improvement for future systems. Further, we use this dataset to shed light on challenges to question answerability, with domain-general implications for any question answering system. The PrivacyQA corpus offers a challenging corpus for question answering, with genuine real-world utility. | {
"paragraphs": [
[
"Privacy policies are the documents which disclose the ways in which a company gathers, uses, shares and manages a user's data. As legal documents, they function using the principle of notice and choice BIBREF0, where companies post their policies, and theoretically, users read the policies and decide to use a company's products or services only if they find the conditions outlined in its privacy policy acceptable. Many legal jurisdictions around the world accept this framework, including the United States and the European Union BIBREF1, BIBREF2. However, the legitimacy of this framework depends upon users actually reading and understanding privacy policies to determine whether company practices are acceptable to them BIBREF3. In practice this is seldom the case BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. This is further complicated by the highly individual and nuanced compromises that users are willing to make with their data BIBREF11, discouraging a `one-size-fits-all' approach to notice of data practices in privacy documents.",
"With devices constantly monitoring our environment, including our personal space and our bodies, lack of awareness of how our data is being used easily leads to problematic situations where users are outraged by information misuse, but companies insist that users have consented. The discovery of increasingly egregious uses of data by companies, such as the scandals involving Facebook and Cambridge Analytica BIBREF12, have further brought public attention to the privacy concerns of the internet and ubiquitous computing. This makes privacy a well-motivated application domain for NLP researchers, where advances in enabling users to quickly identify the privacy issues most salient to them can potentially have large real-world impact.",
"[1]https://play.google.com/store/apps/details?id=com.gotokeep.keep.intl [2]https://play.google.com/store/apps/details?id=com.viber.voip [3]A question might not have any supporting evidence for an answer within the privacy policy.",
"Motivated by this need, we contribute PrivacyQA, a corpus consisting of 1750 questions about the contents of privacy policies, paired with over 3500 expert annotations. The goal of this effort is to kickstart the development of question-answering methods for this domain, to address the (unrealistic) expectation that a large population should be reading many policies per day. In doing so, we identify several understudied challenges to our ability to answer these questions, with broad implications for systems seeking to serve users' information-seeking intent. By releasing this resource, we hope to provide an impetus to develop systems capable of language understanding in this increasingly important domain."
],
[
"Prior work has aimed to make privacy policies easier to understand. Prescriptive approaches towards communicating privacy information BIBREF21, BIBREF22, BIBREF23 have not been widely adopted by industry. Recently, there have been significant research effort devoted to understanding privacy policies by leveraging NLP techniques BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, especially by identifying specific data practices within a privacy policy. We adopt a personalized approach to understanding privacy policies, that allows users to query a document and selectively explore content salient to them. Most similar is the PolisisQA corpus BIBREF29, which examines questions users ask corporations on Twitter. Our approach differs in several ways: 1) The PrivacyQA dataset is larger, containing 10x as many questions and answers. 2) Answers are formulated by domain experts with legal training. 3) PrivacyQA includes diverse question types, including unanswerable and subjective questions.",
"Our work is also related to reading comprehension in the open domain, which is frequently based upon Wikipedia passages BIBREF16, BIBREF17, BIBREF15, BIBREF30 and news articles BIBREF20, BIBREF31, BIBREF32. Table.TABREF4 presents the desirable attributes our dataset shares with past approaches. This work is also tied into research in applying NLP approaches to legal documents BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39. While privacy policies have legal implications, their intended audience consists of the general public rather than individuals with legal expertise. This arrangement is problematic because the entities that write privacy policies often have different goals than the audience. feng2015applying, tan-EtAl:2016:P16-1 examine question answering in the insurance domain, another specialized domain similar to privacy, where the intended audience is the general public."
],
[
"We describe the data collection methodology used to construct PrivacyQA. With the goal of achieving broad coverage across application types, we collect privacy policies from 35 mobile applications representing a number of different categories in the Google Play Store. One of our goals is to include both policies from well-known applications, which are likely to have carefully-constructed privacy policies, and lesser-known applications with smaller install bases, whose policies might be considerably less sophisticated. Thus, setting 5 million installs as a threshold, we ensure each category includes applications with installs on both sides of this threshold. All policies included in the corpus are in English, and were collected before April 1, 2018, predating many companies' GDPR-focused BIBREF41 updates. We leave it to future studies BIBREF42 to look at the impact of the GDPR (e.g., to what extent GDPR requirements contribute to making it possible to provide users with more informative answers, and to what extent their disclosures continue to omit issues that matter to users)."
],
[
"The intended audience for privacy policies consists of the general public. This informs the decision to elicit questions from crowdworkers on the contents of privacy policies. We choose not to show the contents of privacy policies to crowdworkers, a procedure motivated by a desire to avoid inadvertent biases BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47, and encourage crowdworkers to ask a variety of questions beyond only asking questions based on practices described in the document.",
"Instead, crowdworkers are presented with public information about a mobile application available on the Google Play Store including its name, description and navigable screenshots. Figure FIGREF9 shows an example of our user interface. Crowdworkers are asked to imagine they have access to a trusted third-party privacy assistant, to whom they can ask any privacy question about a given mobile application. We use the Amazon Mechanical Turk platform and recruit crowdworkers who have been conferred “master” status and are located within the United States of America. Turkers are asked to provide five questions per mobile application, and are paid $2 per assignment, taking ~eight minutes to complete the task."
],
[
"To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question's relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked."
],
[
"Table.TABREF17 presents aggregate statistics of the PrivacyQA dataset. 1750 questions are posed to our imaginary privacy assistant over 35 mobile applications and their associated privacy documents. As an initial step, we formulate the problem of answering user questions as an extractive sentence selection task, ignoring for now background knowledge, statistical data and legal expertise that could otherwise be brought to bear. The dataset is partitioned into a training set featuring 27 mobile applications and 1350 questions, and a test set consisting of 400 questions over 8 policy documents. This ensures that documents in training and test splits are mutually exclusive. Every question is answered by at least one expert. In addition, in order to estimate annotation reliability and provide for better evaluation, every question in the test set is answered by at least two additional experts.",
"Table TABREF14 describes the distribution over first words of questions posed by crowdworkers. We also observe low redundancy in the questions posed by crowdworkers over each policy, with each policy receiving ~49.94 unique questions despite crowdworkers independently posing questions. Questions are on average 8.4 words long. As declining to answer a question can be a legally sound response but is seldom practically useful, answers to questions where a minority of experts abstain to answer are filtered from the dataset. Privacy policies are ~3000 words long on average. The answers to the question asked by the users typically have ~100 words of evidence in the privacy policy document."
],
[
"Questions are organized under nine categories from the OPP-115 Corpus annotation scheme BIBREF49:",
"",
"First Party Collection/Use: What, why and how information is collected by the service provider",
"Third Party Sharing/Collection: What, why and how information shared with or collected by third parties",
"Data Security: Protection measures for user information",
"Data Retention: How long user information will be stored",
"User Choice/Control: Control options available to users",
"User Access, Edit and Deletion: If/how users can access, edit or delete information",
"Policy Change: Informing users if policy information has been changed",
"International and Specific Audiences: Practices pertaining to a specific group of users",
"Other: General text, contact information or practices not covered by other categories.",
"For each question, domain experts indicate one or more relevant OPP-115 categories. We mark a category as relevant to a question if it is identified as such by at least two annotators. If no such category exists, the category is marked as `Other' if atleast one annotator has identified the `Other' category to be relevant. If neither of these conditions is satisfied, we label the question as having no agreement. The distribution of questions in the corpus across OPP-115 categories is as shown in Table.TABREF16. First party and third party related questions are the largest categories, forming nearly 66.4% of all questions asked to the privacy assistant."
],
[
"When do experts disagree? We would like to analyze the reasons for potential disagreement on the annotation task, to ensure disagreements arise due to valid differences in opinion rather than lack of adequate specification in annotation guidelines. It is important to note that the annotators are experts rather than crowdworkers. Accordingly, their judgements can be considered valid, legally-informed opinions even when their perspectives differ. For the sake of this question we randomly sample 100 instances in the test data and analyze them for likely reasons for disagreements. We consider a disagreement to have occurred when more than one expert does not agree with the majority consensus. By disagreement we mean there is no overlap between the text identified as relevant by one expert and another.",
"We find that the annotators agree on the answer for 74% of the questions, even if the supporting evidence they identify is not identical i.e full overlap. They disagree on the remaining 26%. Sources of apparent disagreement correspond to situations when different experts: have differing interpretations of question intent (11%) (for example, when a user asks 'who can contact me through the app', the questions admits multiple interpretations, including seeking information about the features of the app, asking about first party collection/use of data or asking about third party collection/use of data), identify different sources of evidence for questions that ask if a practice is performed or not (4%), have differing interpretations of policy content (3%), identify a partial answer to a question in the privacy policy (2%) (for example, when the user asks `who is allowed to use the app' a majority of our annotators decline to answer, but the remaining annotators highlight partial evidence in the privacy policy which states that children under the age of 13 are not allowed to use the app), and other legitimate sources of disagreement (6%) which include personal subjective views of the annotators (for example, when the user asks `is my DNA information used in any way other than what is specified', some experts consider the boilerplate text of the privacy policy which states that it abides to practices described in the policy document as sufficient evidence to answer this question, whereas others do not)."
],
[
"We evaluate the ability of machine learning methods to identify relevant evidence for questions in the privacy domain. We establish baselines for the subtask of deciding on the answerability (§SECREF33) of a question, as well as the overall task of identifying evidence for questions from policies (§SECREF37). We describe aspects of the question that can render it unanswerable within the privacy domain (§SECREF41)."
],
[
"We define answerability identification as a binary classification task, evaluating model ability to predict if a question can be answered, given a question in isolation. This can serve as a prior for downstream question-answering. We describe three baselines on the answerability task, and find they considerably improve performance over a majority-class baseline.",
"SVM: We define 3 sets of features to characterize each question. The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel.",
"CNN: We utilize a CNN neural encoder for answerability prediction. We use GloVe word embeddings BIBREF50, and a filter size of 5 with 64 filters to encode questions.",
"BERT: BERT BIBREF51 is a bidirectional transformer-based language-model BIBREF52. We fine-tune BERT-base on our binary answerability identification task with a learning rate of 2e-5 for 3 epochs, with a maximum sequence length of 128."
],
[
"Our goal is to identify evidence within a privacy policy for questions asked by a user. This is framed as an answer sentence selection task, where models identify a set of evidence sentences from all candidate sentences in each policy."
],
[
"Our evaluation metric for answer-sentence selection is sentence-level F1, implemented similar to BIBREF30, BIBREF16. Precision and recall are implemented by measuring the overlap between predicted sentences and sets of gold-reference sentences. We report the average of the maximum F1 from each n$-$1 subset, in relation to the heldout reference."
],
[
"We describe baselines on this task, including a human performance baseline.",
"No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable.",
"Word Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines.",
"BERT: We implement two BERT-based baselines BIBREF51 for evidence identification. First, we train BERT on each query-policy sentence pair as a binary classification task to identify if the sentence is evidence for the question or not (Bert). We also experiment with a two-stage classifier, where we separately train the model on questions only to predict answerability. At inference time, if the answerable classifier predicts the question is answerable, the evidence identification classifier produces a set of candidate sentences (Bert + Unanswerable).",
"Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline."
],
[
"The results of the answerability baselines are presented in Table TABREF31, and on answer sentence selection in Table TABREF32. We observe that bert exhibits the best performance on a binary answerability identification task. However, most baselines considerably exceed the performance of a majority-class baseline. This suggests considerable information in the question, indicating it's possible answerability within this domain.",
"Table.TABREF32 describes the performance of our baselines on the answer sentence selection task. The No-answer (NA) baseline performs at 28 F1, providing a lower bound on performance at this task. We observe that our best-performing baseline, Bert + Unanswerable achieves an F1 of 39.8. This suggest that bert is capable of making some progress towards answering questions in this difficult domain, while still leaving considerable headroom for improvement to reach human performance. Bert + Unanswerable performance suggests that incorporating information about answerability can help in this difficult domain. We examine this challenging phenomena of unanswerability further in Section ."
],
[
"Disagreements are analyzed based on the OPP-115 categories of each question (Table.TABREF34). We compare our best performing BERT variant against the NA model and human performance. We observe significant room for improvement across all categories of questions but especially for first party, third party and data retention categories.",
"We analyze the performance of our strongest BERT variant, to identify classes of errors and directions for future improvement (Table.8). We observe that a majority of answerability mistakes made by the BERT model are questions which are in fact answerable, but are identified as unanswerable by BERT. We observe that BERT makes 124 such mistakes on the test set. We collect expert judgments on relevance, subjectivity , silence and information about how likely the question is to be answered from the privacy policy from our experts. We find that most of these mistakes are relevant questions. However many of them were identified as subjective by the annotators, and at least one annotator marked 19 of these questions as having no answer within the privacy policy. However, only 6 of these questions were unexpected or do not usually have an answer in privacy policies. These findings suggest that a more nuanced understanding of answerability might help improve model performance in his challenging domain."
],
[
"We further ask legal experts to identify potential causes of unanswerability of questions. This analysis has considerable implications. While past work BIBREF17 has treated unanswerable questions as homogeneous, a question answering system might wish to have different treatments for different categories of `unanswerable' questions. The following factors were identified to play a role in unanswerability:",
"Incomprehensibility: If a question is incomprehensible to the extent that its meaning is not intelligible.",
"Relevance: Is this question in the scope of what could be answered by reading the privacy policy.",
"Ill-formedness: Is this question ambiguous or vague. An ambiguous statement will typically contain expressions that can refer to multiple potential explanations, whereas a vague statement carries a concept with an unclear or soft definition.",
"Silence: Other policies answer this type of question but this one does not.",
"Atypicality: The question is of a nature such that it is unlikely for any policy policy to have an answer to the question.",
"Our experts attempt to identify the different `unanswerable' factors for all 573 such questions in the corpus. 4.18% of the questions were identified as being incomprehensible (for example, `any difficulties to occupy the privacy assistant'). Amongst the comprehendable questions, 50% were identified as likely to have an answer within the privacy policy, 33.1% were identified as being privacy-related questions but not within the scope of a privacy policy (e.g., 'has Viber had any privacy breaches in the past?') and 16.9% of questions were identified as completely out-of-scope (e.g., `'will the app consume much space?'). In the questions identified as relevant, 32% were ill-formed questions that were phrased by the user in a manner considered vague or ambiguous. Of the questions that were both relevant as well as `well-formed', 95.7% of the questions were not answered by the policy in question but it was reasonable to expect that a privacy policy would contain an answer. The remaining 4.3% were described as reasonable questions, but of a nature generally not discussed in privacy policies. This suggests that the answerability of questions over privacy policies is a complex issue, and future systems should consider each of these factors when serving user's information seeking intent.",
"We examine a large-scale dataset of “natural” unanswerable questions BIBREF54 based on real user search engine queries to identify if similar unanswerability factors exist. It is important to note that these questions have previously been filtered, according to a criteria for bad questions defined as “(questions that are) ambiguous, incomprehensible, dependent on clear false presuppositions, opinion-seeking, or not clearly a request for factual information.” Annotators made the decision based on the content of the question without viewing the equivalent Wikipedia page. We randomly sample 100 questions from the development set which were identified as unanswerable, and find that 20% of the questions are not questions (e.g., “all I want for christmas is you mariah carey tour”). 12% of questions are unlikely to ever contain an answer on Wikipedia, corresponding closely to our atypicality category. 3% of questions are unlikely to have an answer anywhere (e.g., `what guides Santa home after he has delivered presents?'). 7% of questions are incomplete or open-ended (e.g., `the south west wind blows across nigeria between'). 3% of questions have an unresolvable coreference (e.g., `how do i get to Warsaw Missouri from here'). 4% of questions are vague, and a further 7% have unknown sources of error. 2% still contain false presuppositions (e.g., `what is the only fruit that does not have seeds?') and the remaining 42% do not have an answer within the document. This reinforces our belief that though they have been understudied in past work, any question answering system interacting with real users should expect to receive such unanticipated and unanswerable questions."
],
[
"We present PrivacyQA, the first significant corpus of privacy policy questions and more than 3500 expert annotations of relevant answers. The goal of this work is to promote question-answering research in the specialized privacy domain, where it can have large real-world impact. Strong neural baselines on PrivacyQA achieve a performance of only 39.8 F1 on this corpus, indicating considerable room for future research. Further, we shed light on several important considerations that affect the answerability of questions. We hope this contribution leads to multidisciplinary efforts to precisely understand user intent and reconcile it with information in policy documents, from both the privacy and NLP communities."
],
[
"This research was supported in part by grants from the National Science Foundation Secure and Trustworthy Computing program (CNS-1330596, CNS-1330214, CNS-15-13957, CNS-1801316, CNS-1914486, CNS-1914444) and a DARPA Brandeis grant on Personalized Privacy Assistants (FA8750-15-2-0277). The US Government is authorized to reproduce and distribute reprints for Governmental purposes not withstanding any copyright notation. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF, DARPA, or the US Government. The authors would like to extend their gratitude to Elias Wright, Gian Mascioli, Kiara Pillay, Harrison Kay, Eliel Talo, Alexander Fagella and N. Cameron Russell for providing their valuable expertise and insight to this effort. The authors are also grateful to Eduard Hovy, Lorrie Cranor, Florian Schaub, Joel Reidenberg, Aditya Potukuchi and Igor Shalyminov for helpful discussions related to this work, and to the three anonymous reviewers of this draft for their constructive feedback. Finally, the authors would like to thank all crowdworkers who consented to participate in this study."
]
],
"section_name": [
"Introduction",
"Related Work",
"Data Collection",
"Data Collection ::: Crowdsourced Question Elicitation",
"Data Collection ::: Answer Selection",
"Data Collection ::: Analysis",
"Data Collection ::: Analysis ::: Categories of Questions",
"Data Collection ::: Analysis ::: Answer Validation",
"Experimental Setup",
"Experimental Setup ::: Answerability Identification Baselines",
"Experimental Setup ::: Privacy Question Answering",
"Experimental Setup ::: Privacy Question Answering ::: Evaluation Metric",
"Experimental Setup ::: Privacy Question Answering ::: Baselines",
"Results and Discussion",
"Results and Discussion ::: Error Analysis",
"Results and Discussion ::: What makes Questions Unanswerable?",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"987a26fe49aba0e7141b1033578cf7449bc8eaad"
],
"answer": [
{
"evidence": [
"Prior work has aimed to make privacy policies easier to understand. Prescriptive approaches towards communicating privacy information BIBREF21, BIBREF22, BIBREF23 have not been widely adopted by industry. Recently, there have been significant research effort devoted to understanding privacy policies by leveraging NLP techniques BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, especially by identifying specific data practices within a privacy policy. We adopt a personalized approach to understanding privacy policies, that allows users to query a document and selectively explore content salient to them. Most similar is the PolisisQA corpus BIBREF29, which examines questions users ask corporations on Twitter. Our approach differs in several ways: 1) The PrivacyQA dataset is larger, containing 10x as many questions and answers. 2) Answers are formulated by domain experts with legal training. 3) PrivacyQA includes diverse question types, including unanswerable and subjective questions."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" 2) Answers are formulated by domain experts with legal training. "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
},
{
"annotation_id": [
"16b7a7d52c500f4c78745d63cd743fba95a0fc73"
],
"answer": [
{
"evidence": [
"Table.TABREF17 presents aggregate statistics of the PrivacyQA dataset. 1750 questions are posed to our imaginary privacy assistant over 35 mobile applications and their associated privacy documents. As an initial step, we formulate the problem of answering user questions as an extractive sentence selection task, ignoring for now background knowledge, statistical data and legal expertise that could otherwise be brought to bear. The dataset is partitioned into a training set featuring 27 mobile applications and 1350 questions, and a test set consisting of 400 questions over 8 policy documents. This ensures that documents in training and test splits are mutually exclusive. Every question is answered by at least one expert. In addition, in order to estimate annotation reliability and provide for better evaluation, every question in the test set is answered by at least two additional experts."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In addition, in order to estimate annotation reliability and provide for better evaluation, every question in the test set is answered by at least two additional experts."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
},
{
"annotation_id": [
"b20114db1022ac03147bf2b96148767150672594",
"ed2fb4126dafd802564029926c441bb46a86555d"
],
"answer": [
{
"evidence": [
"To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question's relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked."
],
"extractive_spans": [],
"free_form_answer": "Individuals with legal training",
"highlighted_evidence": [
"To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Prior work has aimed to make privacy policies easier to understand. Prescriptive approaches towards communicating privacy information BIBREF21, BIBREF22, BIBREF23 have not been widely adopted by industry. Recently, there have been significant research effort devoted to understanding privacy policies by leveraging NLP techniques BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, especially by identifying specific data practices within a privacy policy. We adopt a personalized approach to understanding privacy policies, that allows users to query a document and selectively explore content salient to them. Most similar is the PolisisQA corpus BIBREF29, which examines questions users ask corporations on Twitter. Our approach differs in several ways: 1) The PrivacyQA dataset is larger, containing 10x as many questions and answers. 2) Answers are formulated by domain experts with legal training. 3) PrivacyQA includes diverse question types, including unanswerable and subjective questions."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"2) Answers are formulated by domain experts with legal training."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
},
{
"annotation_id": [
"62caef8748ffd79567b46b598be0734dca381b0a",
"bd2677b6b061c0822793308e5f1c8fa92d0d6f95"
],
"answer": [
{
"evidence": [
"BERT: We implement two BERT-based baselines BIBREF51 for evidence identification. First, we train BERT on each query-policy sentence pair as a binary classification task to identify if the sentence is evidence for the question or not (Bert). We also experiment with a two-stage classifier, where we separately train the model on questions only to predict answerability. At inference time, if the answerable classifier predicts the question is answerable, the evidence identification classifier produces a set of candidate sentences (Bert + Unanswerable)."
],
"extractive_spans": [
"Bert + Unanswerable"
],
"free_form_answer": "",
"highlighted_evidence": [
"BERT: We implement two BERT-based baselines BIBREF51 for evidence identification. First, we train BERT on each query-policy sentence pair as a binary classification task to identify if the sentence is evidence for the question or not (Bert). We also experiment with a two-stage classifier, where we separately train the model on questions only to predict answerability. At inference time, if the answerable classifier predicts the question is answerable, the evidence identification classifier produces a set of candidate sentences (Bert + Unanswerable).\n\n"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"CNN: We utilize a CNN neural encoder for answerability prediction. We use GloVe word embeddings BIBREF50, and a filter size of 5 with 64 filters to encode questions.",
"BERT: BERT BIBREF51 is a bidirectional transformer-based language-model BIBREF52. We fine-tune BERT-base on our binary answerability identification task with a learning rate of 2e-5 for 3 epochs, with a maximum sequence length of 128.",
"BERT: We implement two BERT-based baselines BIBREF51 for evidence identification. First, we train BERT on each query-policy sentence pair as a binary classification task to identify if the sentence is evidence for the question or not (Bert). We also experiment with a two-stage classifier, where we separately train the model on questions only to predict answerability. At inference time, if the answerable classifier predicts the question is answerable, the evidence identification classifier produces a set of candidate sentences (Bert + Unanswerable)."
],
"extractive_spans": [
"CNN",
"BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"CNN: We utilize a CNN neural encoder for answerability prediction. We use GloVe word embeddings BIBREF50, and a filter size of 5 with 64 filters to encode questions.",
"BERT: BERT BIBREF51 is a bidirectional transformer-based language-model BIBREF52. We fine-tune BERT-base on our binary answerability identification task with a learning rate of 2e-5 for 3 epochs, with a maximum sequence length of 128.",
"BERT: We implement two BERT-based baselines BIBREF51 for evidence identification."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"7495bdbae5ee3954d1f9a521e7c8850ccaf15584",
"88991973c53d09992b0ffe5c04df79164c9f8363"
],
"answer": [
{
"evidence": [
"SVM: We define 3 sets of features to characterize each question. The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel.",
"No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable.",
"Word Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines.",
"Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline."
],
"extractive_spans": [
"SVM",
"No-Answer Baseline (NA) ",
"Word Count Baseline",
"Human Performance"
],
"free_form_answer": "",
"highlighted_evidence": [
"SVM: We define 3 sets of features to characterize each question. The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel.",
"No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable.",
"Word Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines.",
"Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable.",
"Word Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines.",
"Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline."
],
"extractive_spans": [
"No-Answer Baseline (NA)",
"Word Count Baseline",
"Human Performance"
],
"free_form_answer": "",
"highlighted_evidence": [
"No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable.\n\nWord Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines",
"Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Are the experts comparable to real-world users?",
"Are the answers double (and not triple) annotated?",
"Who were the experts used for annotation?",
"What type of neural model was used?",
"Were other baselines tested to compare with the neural baseline?"
],
"question_id": [
"3c3807f226ba72fc41f59f0338f12a49a0c35605",
"c70bafc35e27be9d1efae60596bc0dd390c124c0",
"81d607fc206198162faa54a796717c2805282d9b",
"51fe4d44887c5cc5fc98b65ca4cb5876f0a56dad",
"f0848e7a339da0828278f6803ed7990366c975f0"
],
"question_writer": [
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255"
],
"search_query": [
"expert question answering",
"expert question answering",
"expert question answering",
"expert question answering",
"expert question answering"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Examples of privacy-related questions users ask, drawn from two mobile applications: Keep1and Viber.2Policy evidence represents sentences in the privacy policy that are relevant for determining the answer to the user’s question.3",
"Table 1: Comparison of the PRIVACYQA dataset to other question answering datasets. Expert annotator indicates domain expertise of the answer provider. Simple evaluation indicates the presence of an automatically calculable evaluation metric. Unanswerable questions indicates if the respective corpus includes unanswerable questions. ‘Asker Cannot See Evidence’ indicates that the asker of the question was not shown evidence from the document at the time of formulating questions.",
"Figure 2: User interface for question elicitation.",
"Table 2: Ten most frequent first words in questions in the PRIVACYQA dataset. We observe high lexical diversity in prefixes with 35 unique first word types and 131 unique combinations of first and second words.",
"Table 3: OPP-115 categories most relevant to the questions collected from users.",
"Table 4: Statistics of the PRIVACYQA Dataset, where # denotes number of questions, policies and sentences, and average length of questions, policies and answers in words, for training and test partitions.",
"Table 5: Classifier Performance (%) for answerability of questions. The Majority Class baseline always predicts that questions are unanswerable.",
"Table 7: Stratification of classifier performance by OPP-115 category of questions.",
"Table 6: Performance of baselines on PRIVACYQA dataset.",
"Table 8: Analysis of BERT performance at identifying answerability. The majority of mistakes made by BERT are answerable Questions identified as unanswerable. These answerable questions are further analyzed along the factors of scope, subjectivity, presence of answer and whether the question could be anticipated."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"3-Figure2-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"6-Table5-1.png",
"6-Table7-1.png",
"6-Table6-1.png",
"6-Table8-1.png"
]
} | [
"Who were the experts used for annotation?"
] | [
[
"1911.00841-Related Work-0",
"1911.00841-Data Collection ::: Answer Selection-0"
]
] | [
"Individuals with legal training"
] | 211 |
1605.03481 | Tweet2Vec: Character-Based Distributed Representations for Social Media | Text from social media provides a set of challenges that can cause traditional NLP approaches to fail. Informal language, spelling errors, abbreviations, and special characters are all commonplace in these posts, leading to a prohibitively large vocabulary size for word-level approaches. We propose a character composition model, tweet2vec, which finds vector-space representations of whole tweets by learning complex, non-local dependencies in character sequences. The proposed model outperforms a word-level baseline at predicting user-annotated hashtags associated with the posts, doing significantly better when the input contains many out-of-vocabulary words or unusual character sequences. Our tweet2vec encoder is publicly available. | {
"paragraphs": [
[
"We understand from Zipf's Law that in any natural language corpus a majority of the vocabulary word types will either be absent or occur in low frequency. Estimating the statistical properties of these rare word types is naturally a difficult task. This is analogous to the curse of dimensionality when we deal with sequences of tokens - most sequences will occur only once in the training data. Neural network architectures overcome this problem by defining non-linear compositional models over vector space representations of tokens and hence assign non-zero probability even to sequences not seen during training BIBREF0 , BIBREF1 . In this work, we explore a similar approach to learning distributed representations of social media posts by composing them from their constituent characters, with the goal of generalizing to out-of-vocabulary words as well as sequences at test time.",
"Traditional Neural Network Language Models (NNLMs) treat words as the basic units of language and assign independent vectors to each word type. To constrain memory requirements, the vocabulary size is fixed before-hand; therefore, rare and out-of-vocabulary words are all grouped together under a common type `UNKNOWN'. This choice is motivated by the assumption of arbitrariness in language, which means that surface forms of words have little to do with their semantic roles. Recently, BIBREF2 challenge this assumption and present a bidirectional Long Short Term Memory (LSTM) BIBREF3 for composing word vectors from their constituent characters which can memorize the arbitrary aspects of word orthography as well as generalize to rare and out-of-vocabulary words.",
"Encouraged by their findings, we extend their approach to a much larger unicode character set, and model long sequences of text as functions of their constituent characters (including white-space). We focus on social media posts from the website Twitter, which are an excellent testing ground for character based models due to the noisy nature of text. Heavy use of slang and abundant misspellings means that there are many orthographically and semantically similar tokens, and special characters such as emojis are also immensely popular and carry useful semantic information. In our moderately sized training dataset of 2 million tweets, there were about 0.92 million unique word types. It would be expensive to capture all these phenomena in a word based model in terms of both the memory requirement (for the increased vocabulary) and the amount of training data required for effective learning. Additional benefits of the character based approach include language independence of the methods, and no requirement of NLP preprocessing such as word-segmentation.",
"A crucial step in learning good text representations is to choose an appropriate objective function to optimize. Unsupervised approaches attempt to reconstruct the original text from its latent representation BIBREF4 , BIBREF0 . Social media posts however, come with their own form of supervision annotated by millions of users, in the form of hashtags which link posts about the same topic together. A natural assumption is that the posts with the same hashtags should have embeddings which are close to each other. Hence, we formulate our training objective to maximize cross-entropy loss at the task of predicting hashtags for a post from its latent representation.",
"We propose a Bi-directional Gated Recurrent Unit (Bi-GRU) BIBREF5 neural network for learning tweet representations. Treating white-space as a special character itself, the model does a forward and backward pass over the entire sequence, and the final GRU states are linearly combined to get the tweet embedding. Posterior probabilities over hashtags are computed by projecting this embedding to a softmax output layer. Compared to a word-level baseline this model shows improved performance at predicting hashtags for a held-out set of posts. Inspired by recent work in learning vector space text representations, we name our model tweet2vec."
],
[
"Using neural networks to learn distributed representations of words dates back to BIBREF0 . More recently, BIBREF4 released word2vec - a collection of word vectors trained using a recurrent neural network. These word vectors are in widespread use in the NLP community, and the original work has since been extended to sentences BIBREF1 , documents and paragraphs BIBREF6 , topics BIBREF7 and queries BIBREF8 . All these methods require storing an extremely large table of vectors for all word types and cannot be easily generalized to unseen words at test time BIBREF2 . They also require preprocessing to find word boundaries which is non-trivial for a social network domain like Twitter.",
"In BIBREF2 , the authors present a compositional character model based on bidirectional LSTMs as a potential solution to these problems. A major benefit of this approach is that large word lookup tables can be compacted into character lookup tables and the compositional model scales to large data sets better than other state-of-the-art approaches. While BIBREF2 generate word embeddings from character representations, we propose to generate vector representations of entire tweets from characters in our tweet2vec model.",
"Our work adds to the growing body of work showing the applicability of character models for a variety of NLP tasks such as Named Entity Recognition BIBREF9 , POS tagging BIBREF10 , text classification BIBREF11 and language modeling BIBREF12 , BIBREF13 .",
"Previously, BIBREF14 dealt with the problem of estimating rare word representations by building them from their constituent morphemes. While they show improved performance over word-based models, their approach requires a morpheme parser for preprocessing which may not perform well on noisy text like Twitter. Also the space of all morphemes, though smaller than the space of all words, is still large enough that modelling all morphemes is impractical.",
"Hashtag prediction for social media has been addressed earlier, for example in BIBREF15 , BIBREF16 . BIBREF15 also use a neural architecture, but compose text embeddings from a lookup table of words. They also show that the learned embeddings can generalize to an unrelated task of document recommendation, justifying the use of hashtags as supervision for learning text representations."
],
[
"Bi-GRU Encoder: Figure 1 shows our model for encoding tweets. It uses a similar structure to the C2W model in BIBREF2 , with LSTM units replaced with GRU units.",
"The input to the network is defined by an alphabet of characters $C$ (this may include the entire unicode character set). The input tweet is broken into a stream of characters $c_1, c_2, ... c_m$ each of which is represented by a 1-by- $|C|$ encoding. These one-hot vectors are then projected to a character space by multiplying with the matrix $P_C \\in \\mathbb {R}^{|C| \\times d_c}$ , where $d_c$ is the dimension of the character vector space. Let $x_1, x_2, ... x_m$ be the sequence of character vectors for the input tweet after the lookup. The encoder consists of a forward-GRU and a backward-GRU. Both have the same architecture, except the backward-GRU processes the sequence in reverse order. Each of the GRU units process these vectors sequentially, and starting with the initial state $h_0$ compute the sequence $h_1, h_2, ... h_m$ as follows: $\nr_t &= \\sigma (W_r x_t + U_r h_{t-1} + b_r), \\\\\nz_t &= \\sigma (W_z x_t + U_z h_{t-1} + b_z), \\\\\n\\tilde{h}_t &= tanh(W_h x_t + U_h (r_t \\odot h_{t-1}) + b_h), \\\\\nh_t &= (1-z_t) \\odot h_{t-1} + z_t \\odot \\tilde{h}_t.\n$ ",
"Here $r_t$ , $z_t$ are called the reset and update gates respectively, and $\\tilde{h}_t$ is the candidate output state which is converted to the actual output state $h_t$ . $W_r, W_z, W_h$ are $d_h \\times d_c$ matrices and $U_r, U_z, U_h$ are $d_h \\times d_h$ matrices, where $d_h$ is the hidden state dimension of the GRU. The final states $h_m^f$ from the forward-GRU, and $z_t$0 from the backward GRU are combined using a fully-connected layer to the give the final tweet embedding $z_t$1 : ",
"$$e_t = W^f h_m^f + W^b h_0^b$$ (Eq. 3) ",
"Here $W^f, W^b$ are $d_t \\times d_h$ and $b$ is $d_t \\times 1$ bias term, where $d_t$ is the dimension of the final tweet embedding. In our experiments we set $d_t=d_h$ . All parameters are learned using gradient descent.",
"Softmax: Finally, the tweet embedding is passed through a linear layer whose output is the same size as the number of hashtags $L$ in the data set. We use a softmax layer to compute the posterior hashtag probabilities: ",
"$$P(y=j |e) = \\frac{exp(w_j^Te + b_j)}{\\sum _{i=1}^L exp(w_i^Te + b_j)}.$$ (Eq. 4) ",
"Objective Function: We optimize the categorical cross-entropy loss between predicted and true hashtags: ",
"$$J = \\frac{1}{B} \\sum _{i=1}^{B} \\sum _{j=1}^{L} -t_{i,j}log(p_{i,j}) + \\lambda \\Vert \\Theta \\Vert ^2.$$ (Eq. 5) ",
"Here $B$ is the batch size, $L$ is the number of classes, $p_{i,j}$ is the predicted probability that the $i$ -th tweet has hashtag $j$ , and $t_{i,j} \\in \\lbrace 0,1\\rbrace $ denotes the ground truth of whether the $j$ -th hashtag is in the $i$ -th tweet. We use L2-regularization weighted by $\\lambda $ ."
],
[
"Since our objective is to compare character-based and word-based approaches, we have also implemented a simple word-level encoder for tweets. The input tweet is first split into tokens along white-spaces. A more sophisticated tokenizer may be used, but for a fair comparison we wanted to keep language specific preprocessing to a minimum. The encoder is essentially the same as tweet2vec, with the input as words instead of characters. A lookup table stores word vectors for the $V$ (20K here) most common words, and the rest are grouped together under the `UNK' token."
],
[
"Our dataset consists of a large collection of global posts from Twitter between the dates of June 1, 2013 to June 5, 2013. Only English language posts (as detected by the lang field in Twitter API) and posts with at least one hashtag are retained. We removed infrequent hashtags ( $<500$ posts) since they do not have enough data for good generalization. We also removed very frequent tags ( $>19K$ posts) which were almost always from automatically generated posts (ex: #androidgame) which are trivial to predict. The final dataset contains 2 million tweets for training, 10K for validation and 50K for testing, with a total of 2039 distinct hashtags. We use simple regex to preprocess the post text and remove hashtags (since these are to be predicted) and HTML tags, and replace user-names and URLs with special tokens. We also removed retweets and convert the text to lower-case."
],
[
"Word vectors and character vectors are both set to size $d_L=150$ for their respective models. There were 2829 unique characters in the training set and we model each of these independently in a character look-up table. Embedding sizes were chosen such that each model had roughly the same number of parameters (Table 2 ). Training is performed using mini-batch gradient descent with Nesterov's momentum. We use a batch size $B=64$ , initial learning rate $\\eta _0=0.01$ and momentum parameter $\\mu _0=0.9$ . L2-regularization with $\\lambda =0.001$ was applied to all models. Initial weights were drawn from 0-mean gaussians with $\\sigma =0.1$ and initial biases were set to 0. The hyperparameters were tuned one at a time keeping others fixed, and values with the lowest validation cost were chosen. The resultant combination was used to train the models until performance on validation set stopped increasing. During training, the learning rate is halved everytime the validation set precision increases by less than 0.01 % from one epoch to the next. The models converge in about 20 epochs. Code for training both the models is publicly available on github."
],
[
"We test the character and word-level variants by predicting hashtags for a held-out test set of posts. Since there may be more than one correct hashtag per post, we generate a ranked list of tags for each post from the output posteriors, and report average precision@1, recall@10 and mean rank of the correct hashtags. These are listed in Table 3 .",
"To see the performance of each model on posts containing rare words (RW) and frequent words (FW) we selected two test sets each containing 2,000 posts. We populated these sets with posts which had the maximum and minimum number of out-of-vocabulary words respectively, where vocabulary is defined by the 20K most frequent words. Overall, tweet2vec outperforms the word model, doing significantly better on RW test set and comparably on FW set. This improved performance comes at the cost of increased training time (see Table 2 ), since moving from words to characters results in longer input sequences to the GRU.",
"We also study the effect of model size on the performance of these models. For the word model we set vocabulary size $V$ to 8K, 15K and 20K respectively. For tweet2vec we set the GRU hidden state size to 300, 400 and 500 respectively. Figure 2 shows precision 1 of the two models as the number of parameters is increased, for each test set described above. There is not much variation in the performance, and moreover tweet2vec always outperforms the word based model for the same number of parameters.",
"Table 4 compares the models as complexity of the task is increased. We created 3 datasets (small, medium and large) with an increasing number of hashtags to be predicted. This was done by varying the lower threshold of the minimum number of tags per post for it to be included in the dataset. Once again we observe that tweet2vec outperforms its word-based counterpart for each of the three settings.",
"Finally, table 1 shows some predictions from the word level model and tweet2vec. We selected these to highlight some strengths of the character based approach - it is robust to word segmentation errors and spelling mistakes, effectively interprets emojis and other special characters to make predictions, and also performs comparably to the word-based approach for in-vocabulary tokens."
],
[
"We have presented tweet2vec - a character level encoder for social media posts trained using supervision from associated hashtags. Our result shows that tweet2vec outperforms the word based approach, doing significantly better when the input post contains many rare words. We have focused only on English language posts, but the character model requires no language specific preprocessing and can be extended to other languages. For future work, one natural extension would be to use a character-level decoder for predicting the hashtags. This will allow generation of hashtags not seen in the training dataset. Also, it will be interesting to see how our tweet2vec embeddings can be used in domains where there is a need for semantic understanding of social media, such as tracking infectious diseases BIBREF17 . Hence, we provide an off-the-shelf encoder trained on medium dataset described above to compute vector-space representations of tweets along with our code on github."
],
[
"We would like to thank Alex Smola, Yun Fu, Hsiao-Yu Fish Tung, Ruslan Salakhutdinov, and Barnabas Poczos for useful discussions. We would also like to thank Juergen Pfeffer for providing access to the Twitter data, and the reviewers for their comments."
]
],
"section_name": [
"Introduction",
"Related Work",
"Tweet2Vec",
"Word Level Baseline",
"Data",
"Implementation Details",
"Results",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"79d5491008df7779d31fd8e26286ee88bcc4dc08"
],
"answer": [
{
"evidence": [
"Encouraged by their findings, we extend their approach to a much larger unicode character set, and model long sequences of text as functions of their constituent characters (including white-space). We focus on social media posts from the website Twitter, which are an excellent testing ground for character based models due to the noisy nature of text. Heavy use of slang and abundant misspellings means that there are many orthographically and semantically similar tokens, and special characters such as emojis are also immensely popular and carry useful semantic information. In our moderately sized training dataset of 2 million tweets, there were about 0.92 million unique word types. It would be expensive to capture all these phenomena in a word based model in terms of both the memory requirement (for the increased vocabulary) and the amount of training data required for effective learning. Additional benefits of the character based approach include language independence of the methods, and no requirement of NLP preprocessing such as word-segmentation."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We focus on social media posts from the website Twitter, which are an excellent testing ground for character based models due to the noisy nature of text. Heavy use of slang and abundant misspellings means that there are many orthographically and semantically similar tokens, and special characters such as emojis are also immensely popular and carry useful semantic information. In our moderately sized training dataset of 2 million tweets, there were about 0.92 million unique word types."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"08f81a5d78e451df16193028defb70150c4201c9"
]
},
{
"annotation_id": [
"1735c34933a8214f841762747a582b5b7584dd36",
"536813de8a2afdead639fed1ac771d35d045d23f"
],
"answer": [
{
"evidence": [
"Hashtag prediction for social media has been addressed earlier, for example in BIBREF15 , BIBREF16 . BIBREF15 also use a neural architecture, but compose text embeddings from a lookup table of words. They also show that the learned embeddings can generalize to an unrelated task of document recommendation, justifying the use of hashtags as supervision for learning text representations."
],
"extractive_spans": [],
"free_form_answer": "established task",
"highlighted_evidence": [
"Hashtag prediction for social media has been addressed earlier, for example in BIBREF15 , BIBREF16 . BIBREF15 also use a neural architecture, but compose text embeddings from a lookup table of words."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Hashtag prediction for social media has been addressed earlier, for example in BIBREF15 , BIBREF16 . BIBREF15 also use a neural architecture, but compose text embeddings from a lookup table of words. They also show that the learned embeddings can generalize to an unrelated task of document recommendation, justifying the use of hashtags as supervision for learning text representations."
],
"extractive_spans": [
"Hashtag prediction for social media has been addressed earlier"
],
"free_form_answer": "",
"highlighted_evidence": [
"Hashtag prediction for social media has been addressed earlier, for example in BIBREF15 , BIBREF16 . BIBREF15 also use a neural architecture, but compose text embeddings from a lookup table of words."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"08f81a5d78e451df16193028defb70150c4201c9"
]
},
{
"annotation_id": [
"703500097a45acc0e3fe9e2ad85705edd43f1422",
"fd4fba152b1da6491019c5d5fd4f4e4ff073313b"
],
"answer": [
{
"evidence": [
"Since our objective is to compare character-based and word-based approaches, we have also implemented a simple word-level encoder for tweets. The input tweet is first split into tokens along white-spaces. A more sophisticated tokenizer may be used, but for a fair comparison we wanted to keep language specific preprocessing to a minimum. The encoder is essentially the same as tweet2vec, with the input as words instead of characters. A lookup table stores word vectors for the $V$ (20K here) most common words, and the rest are grouped together under the `UNK' token."
],
"extractive_spans": [
"a simple word-level encoder",
"The encoder is essentially the same as tweet2vec, with the input as words instead of characters."
],
"free_form_answer": "",
"highlighted_evidence": [
"Since our objective is to compare character-based and word-based approaches, we have also implemented a simple word-level encoder for tweets. The input tweet is first split into tokens along white-spaces. A more sophisticated tokenizer may be used, but for a fair comparison we wanted to keep language specific preprocessing to a minimum. The encoder is essentially the same as tweet2vec, with the input as words instead of characters. A lookup table stores word vectors for the $V$ (20K here) most common words, and the rest are grouped together under the `UNK' token."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Since our objective is to compare character-based and word-based approaches, we have also implemented a simple word-level encoder for tweets. The input tweet is first split into tokens along white-spaces. A more sophisticated tokenizer may be used, but for a fair comparison we wanted to keep language specific preprocessing to a minimum. The encoder is essentially the same as tweet2vec, with the input as words instead of characters. A lookup table stores word vectors for the $V$ (20K here) most common words, and the rest are grouped together under the `UNK' token."
],
"extractive_spans": [
"The encoder is essentially the same as tweet2vec, with the input as words instead of characters."
],
"free_form_answer": "",
"highlighted_evidence": [
"The encoder is essentially the same as tweet2vec, with the input as words instead of characters."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"08f81a5d78e451df16193028defb70150c4201c9"
]
},
{
"annotation_id": [
"33bfd6cfadf0cffbb193aeab89867b8bc73d94b3"
],
"answer": [
{
"evidence": [
"We test the character and word-level variants by predicting hashtags for a held-out test set of posts. Since there may be more than one correct hashtag per post, we generate a ranked list of tags for each post from the output posteriors, and report average precision@1, recall@10 and mean rank of the correct hashtags. These are listed in Table 3 ."
],
"extractive_spans": [],
"free_form_answer": "None",
"highlighted_evidence": [
"We test the character and word-level variants by predicting hashtags for a held-out test set of posts."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"08f81a5d78e451df16193028defb70150c4201c9"
]
},
{
"annotation_id": [
"8ecf86e5456fa469c761a434788da56e40d1d1c1",
"fd61f88ea325f093c47d50abe8a9196a3fc8b480"
],
"answer": [
{
"evidence": [
"Since our objective is to compare character-based and word-based approaches, we have also implemented a simple word-level encoder for tweets. The input tweet is first split into tokens along white-spaces. A more sophisticated tokenizer may be used, but for a fair comparison we wanted to keep language specific preprocessing to a minimum. The encoder is essentially the same as tweet2vec, with the input as words instead of characters. A lookup table stores word vectors for the $V$ (20K here) most common words, and the rest are grouped together under the `UNK' token."
],
"extractive_spans": [
"a simple word-level encoder",
"with the input as words instead of characters"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since our objective is to compare character-based and word-based approaches, we have also implemented a simple word-level encoder for tweets. The input tweet is first split into tokens along white-spaces. A more sophisticated tokenizer may be used, but for a fair comparison we wanted to keep language specific preprocessing to a minimum. The encoder is essentially the same as tweet2vec, with the input as words instead of characters. A lookup table stores word vectors for the $V$ (20K here) most common words, and the rest are grouped together under the `UNK' token."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Since our objective is to compare character-based and word-based approaches, we have also implemented a simple word-level encoder for tweets. The input tweet is first split into tokens along white-spaces. A more sophisticated tokenizer may be used, but for a fair comparison we wanted to keep language specific preprocessing to a minimum. The encoder is essentially the same as tweet2vec, with the input as words instead of characters. A lookup table stores word vectors for the $V$ (20K here) most common words, and the rest are grouped together under the `UNK' token."
],
"extractive_spans": [
"The encoder is essentially the same as tweet2vec, with the input as words instead of characters"
],
"free_form_answer": "",
"highlighted_evidence": [
"The encoder is essentially the same as tweet2vec, with the input as words instead of characters. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"08f81a5d78e451df16193028defb70150c4201c9"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"yes"
],
"question": [
"Does the paper clearly establish that the challenges listed here exist in this dataset and task?",
"Is this hashtag prediction task an established task, or something new?",
"What is the word-level baseline?",
"What other tasks do they test their method on?",
"what is the word level baseline they compare to?"
],
"question_id": [
"b85fc420eb2f77f6f14f375cc1fcc5155eb5c0a8",
"792f6d76d2befba2af07198584aac1b189583ae4",
"127d5ddfabec5c58832e5865cbd8ed0978c25a13",
"b91671715ad4fad56c67c28ce6f29e180fe08595",
"a6d37b5975050da0b1959232ae756fc09e5f87e8"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"social",
"social",
"social",
"social",
"social"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Tweet2Vec encoder for social media text",
"Table 1: Examples of top predictions from the models. The correct hashtag(s) if detected are in bold.",
"Table 2: Model sizes and training time/epoch",
"Table 3: Hashtag prediction results. Best numbers for each test set are in bold.",
"Figure 2: Precision @1 v Number of model parameters for word model and tweet2vec.",
"Table 4: Precision @1 as training data size and number of output labels is increased. Note that the test set is different for each setting."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"5-Figure2-1.png",
"5-Table4-1.png"
]
} | [
"Is this hashtag prediction task an established task, or something new?",
"What other tasks do they test their method on?"
] | [
[
"1605.03481-Related Work-4"
],
[
"1605.03481-Results-0"
]
] | [
"established task",
"None"
] | 212 |
1908.07245 | GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge | Word Sense Disambiguation (WSD) aims to find the exact sense of an ambiguous word in a particular context. Traditional supervised methods rarely take into consideration the lexical resources like WordNet, which are widely utilized in knowledge-based methods. Recent studies have shown the effectiveness of incorporating gloss (sense definition) into neural networks for WSD. However, compared with traditional word expert supervised methods, they have not achieved much improvement. In this paper, we focus on how to better leverage gloss knowledge in a supervised neural WSD system. We construct context-gloss pairs and propose three BERT-based models for WSD. We fine-tune the pre-trained BERT model on SemCor3.0 training corpus and the experimental results on several English all-words WSD benchmark datasets show that our approach outperforms the state-of-the-art systems. | {
"paragraphs": [
[
"Word Sense Disambiguation (WSD) is a fundamental task and long-standing challenge in Natural Language Processing (NLP), which aims to find the exact sense of an ambiguous word in a particular context BIBREF0. Previous WSD approaches can be grouped into two main categories: knowledge-based and supervised methods.",
"Knowledge-based WSD methods rely on lexical resources like WordNet BIBREF1 and usually exploit two kinds of lexical knowledge. The gloss, which defines a word sense meaning, is first utilized in Lesk algorithm BIBREF2 and then widely taken into account in many other approaches BIBREF3, BIBREF4. Besides, structural properties of semantic graphs are mainly used in graph-based algorithms BIBREF5, BIBREF6.",
"Traditional supervised WSD methods BIBREF7, BIBREF8, BIBREF9 focus on extracting manually designed features and then train a dedicated classifier (word expert) for every target lemma.",
"Although word expert supervised WSD methods perform better, they are less flexible than knowledge-based methods in the all-words WSD task BIBREF10. Recent neural-based methods are devoted to dealing with this problem. BIBREF11 present a supervised classifier based on Bi-LSTM, which shares parameters among all word types except the last layer. BIBREF10 convert WSD task to a sequence labeling task, thus building a unified model for all polysemous words. However, neither of them can totally beat the best word expert supervised methods.",
"More recently, BIBREF12 propose to leverage the gloss information from WordNet and model the semantic relationship between the context and gloss in an improved memory network. Similarly, BIBREF13 introduce a (hierarchical) co-attention mechanism to generate co-dependent representations for the context and gloss. Their attempts prove that incorporating gloss knowledge into supervised WSD approach is helpful, but they still have not achieved much improvement, because they may not make full use of gloss knowledge.",
"In this paper, we focus on how to better leverage gloss information in a supervised neural WSD system. Recently, the pre-trained language models, such as ELMo BIBREF14 and BERT BIBREF15, have shown their effectiveness to alleviate the effort of feature engineering. Especially, BERT has achieved excellent results in question answering (QA) and natural language inference (NLI). We construct context-gloss pairs from glosses of all possible senses (in WordNet) of the target word, thus treating WSD task as a sentence-pair classification problem. We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on WSD task. In particular, our contribution is two-fold:",
"1. We construct context-gloss pairs and propose three BERT-based models for WSD.",
"2. We fine-tune the pre-trained BERT model, and the experimental results on several English all-words WSD benchmark datasets show that our approach significantly outperforms the state-of-the-art systems."
],
[
"In this section, we describe our method in detail."
],
[
"In WSD, a sentence $s$ usually consists of a series of words: $\\lbrace w_1,\\cdots ,w_m\\rbrace $, and some of the words $\\lbrace w_{i_1},\\cdots ,w_{i_k}\\rbrace $ are targets $\\lbrace t_1,\\cdots ,t_k\\rbrace $ need to be disambiguated. For each target $t$, its candidate senses $\\lbrace c_1,\\cdots ,c_n\\rbrace $ come from entries of its lemma in a pre-defined sense inventory (usually WordNet). Therefore, WSD task aims to find the most suitable entry (symbolized as unique sense key) for each target in a sentence. See a sentence example in Table TABREF1."
],
[
"BERT BIBREF15 is a new language representation model, and its architecture is a multi-layer bidirectional Transformer encoder. BERT model is pre-trained on a large corpus and two novel unsupervised prediction tasks, i.e., masked language model and next sentence prediction tasks are used in pre-training. When incorporating BERT into downstream tasks, the fine-tuning procedure is recommended. We fine-tune the pre-trained BERT model on WSD task."
],
[
"Since every target in a sentence needs to be disambiguated to find its exact sense, WSD task can be regarded as a token-level classification task. To incorporate BERT to WSD task, we take the final hidden state of the token corresponding to the target word (if more than one token, we average them) and add a classification layer for every target lemma, which is the same as the last layer of the Bi-LSTM model BIBREF11."
],
[
"BERT can explicitly model the relationship of a pair of texts, which has shown to be beneficial to many pair-wise natural language understanding tasks. In order to fully leverage gloss information, we propose GlossBERT to construct context-gloss pairs from all possible senses of the target word in WordNet, thus treating WSD task as a sentence-pair classification problem.",
"We describe our construction method with an example (See Table TABREF1). There are four targets in this sentence, and here we take target word research as an example:"
],
[
"The sentence containing target words is denoted as context sentence. For each target word, we extract glosses of all $N$ possible senses (here $N=4$) of the target word (research) in WordNet to obtain the gloss sentence. [CLS] and [SEP] marks are added to the context-gloss pairs to make it suitable for the input of BERT model. A similar idea is also used in aspect-based sentiment analysis BIBREF16."
],
[
"Based on the previous construction method, we add weak supervised signals to the context-gloss pairs (see the highlighted part in Table TABREF1). The signal in the gloss sentence aims to point out the target word, and the signal in the context sentence aims to emphasize the target word considering the situation that a target word may occur more than one time in the same sentence.",
"Therefore, each target word has $N$ context-gloss pair training instances ($label\\in \\lbrace yes, no\\rbrace $). When testing, we output the probability of $label=yes$ of each context-gloss pair and choose the sense corresponding to the highest probability as the prediction label of the target word. We experiment with three GlossBERT models:"
],
[
"We use context-gloss pairs as input. We highlight the target word by taking the final hidden state of the token corresponding to the target word (if more than one token, we average them) and add a classification layer ($label\\in \\lbrace yes, no\\rbrace $)."
],
[
"We use context-gloss pairs as input. We take the final hidden state of the first token [CLS] as the representation of the whole sequence and add a classification layer ($label\\in \\lbrace yes, no\\rbrace $), which does not highlight the target word."
],
[
"We use context-gloss pairs with weak supervision as input. We take the final hidden state of the first token [CLS] and add a classification layer ($label\\in \\lbrace yes, no\\rbrace $), which weekly highlight the target word by the weak supervision."
],
[
"The statistics of the WSD datasets are shown in Table TABREF12."
],
[
"Following previous work BIBREF13, BIBREF12, BIBREF10, BIBREF17, BIBREF9, BIBREF7, we choose SemCor3.0 as training corpus, which is the largest corpus manually annotated with WordNet sense for WSD."
],
[
"We evaluate our method on several English all-words WSD datasets. For a fair comparison, we use the benchmark datasets proposed by BIBREF17 which include five standard all-words fine-grained WSD datasets from the Senseval and SemEval competitions: Senseval-2 (SE2), Senseval-3 (SE3), SemEval-2007 (SE07), SemEval-2013 (SE13) and SemEval-2015 (SE15). Following BIBREF13, BIBREF12 and BIBREF10, we choose SE07, the smallest among these test sets, as the development set."
],
[
"Since BIBREF17 map all the sense annotations in these datasets from their original versions to WordNet 3.0, we extract word sense glosses from WordNet 3.0."
],
[
"We use the pre-trained uncased BERT$_\\mathrm {BASE}$ model for fine-tuning, because we find that BERT$_\\mathrm {LARGE}$ model performs slightly worse than BERT$_\\mathrm {BASE}$ in this task. The number of Transformer blocks is 12, the number of the hidden layer is 768, the number of self-attention heads is 12, and the total number of parameters of the pre-trained model is 110M. When fine-tuning, we use the development set (SE07) to find the optimal settings for our experiments. We keep the dropout probability at 0.1, set the number of epochs to 4. The initial learning rate is 2e-5, and the batch size is 64."
],
[
"Table TABREF19 shows the performance of our method on the English all-words WSD benchmark datasets. We compare our approach with previous methods.",
"The first block shows the MFS baseline, which selects the most frequent sense in the training corpus for each target word.",
"The second block shows two knowledge-based systems. Lesk$_{ext+emb}$ BIBREF4 is a variant of Lesk algorithm BIBREF2 by calculating the gloss-context overlap of the target word. Babelfy BIBREF6 is a unified graph-based approach which exploits the semantic network structure from BabelNet.",
"The third block shows two word expert traditional supervised systems. IMS BIBREF7 is a flexible framework which trains SVM classifiers and uses local features. And IMS$_{+emb}$ BIBREF9 is the best configuration of the IMS framework, which also integrates word embeddings as features.",
"The fourth block shows several recent neural-based methods. Bi-LSTM BIBREF11 is a baseline for neural models. Bi-LSTM$_{+ att. + LEX + POS}$ BIBREF10 is a multi-task learning framework for WSD, POS tagging, and LEX with self-attention mechanism, which converts WSD to a sequence learning task. GAS$_{ext}$ BIBREF12 is a variant of GAS which is a gloss-augmented variant of the memory network by extending gloss knowledge. CAN$^s$ and HCAN BIBREF13 are sentence-level and hierarchical co-attention neural network models which leverage gloss knowledge.",
"In the last block, we report the performance of our method. BERT(Token-CLS) is our baseline, which does not incorporate gloss information, and it performs slightly worse than previous traditional supervised methods and recent neural-based methods. It proves that directly using BERT cannot obtain performance growth. The other three methods outperform other models by a substantial margin, which proves that the improvements come from leveraging BERT to better exploit gloss information. It is worth noting that our method achieves significant improvements in SE07 and Verb over previous methods, which have the highest ambiguity level among all datasets and all POS tags respectively according to BIBREF17.",
"Moreover, GlossBERT(Token-CLS) performs better than GlossBERT(Sent-CLS), which proves that highlighting the target word in the sentence is important. However, the weakly highlighting method GlossBERT(Sent-CLS-WS) performs best in most circumstances, which may result from its combination of the advantages of the other two methods."
],
[
"There are two main reasons for the great improvements of our experimental results. First, we construct context-gloss pairs and convert WSD problem to a sentence-pair classification task which is similar to NLI tasks and train only one classifier, which is equivalent to expanding the corpus. Second, we leverage BERT BIBREF15 to better exploit the gloss information. BERT model shows its advantage in dealing with sentence-pair classification tasks by its amazing improvement on QA and NLI tasks. This advantage comes from both of its two novel unsupervised prediction tasks.",
"Compared with traditional word expert supervised methods, our GlossBERT shows its effectiveness to alleviate the effort of feature engineering and does not require training a dedicated classifier for every target lemma. Up to now, it can be said that the neural network method can totally beat the traditional word expert method. Compared with recent neural-based methods, our solution is more intuitive and can make better use of gloss knowledge. Besides, our approach demonstrates that when we fine-tune BERT on a downstream task, converting it into a sentence-pair classification task may be a good choice."
],
[
"In this paper, we seek to better leverage gloss knowledge in a supervised neural WSD system. We propose a new solution to WSD by constructing context-gloss pairs and then converting WSD to a sentence-pair classification task. We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on WSD task."
],
[
"We would like to thank the anonymous reviewers for their valuable comments. The research work is supported by National Natural Science Foundation of China (No. 61751201 and 61672162), Shanghai Municipal Science and Technology Commission (16JC1420401 and 17JC1404100), Shanghai Municipal Science and Technology Major Project (No.2018SHZDZX01) and ZJLab."
]
],
"section_name": [
"Introduction",
"Methodology",
"Methodology ::: Task Definition",
"Methodology ::: BERT",
"Methodology ::: BERT ::: BERT(Token-CLS)",
"Methodology ::: GlossBERT",
"Methodology ::: GlossBERT ::: Context-Gloss Pairs",
"Methodology ::: GlossBERT ::: Context-Gloss Pairs with Weak Supervision",
"Methodology ::: GlossBERT ::: GlossBERT(Token-CLS)",
"Methodology ::: GlossBERT ::: GlossBERT(Sent-CLS)",
"Methodology ::: GlossBERT ::: GlossBERT(Sent-CLS-WS)",
"Experiments ::: Datasets",
"Experiments ::: Datasets ::: Training Dataset",
"Experiments ::: Datasets ::: Evaluation Datasets",
"Experiments ::: Datasets ::: WordNet",
"Experiments ::: Settings",
"Experiments ::: Results",
"Experiments ::: Discussion",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"388ed9ddef50a9419c1cf6d0e81e09888ef599c6"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: F1-score (%) for fine-grained English all-words WSD on the test sets in the framework of Raganato et al. (2017b) (including the development set SE07). Bold font indicates best systems. The five blocks list the MFS baseline, two knowledge-based systems, two traditional word expert supervised systems, six recent neural-based systems and our systems, respectively. Results in first three blocks come from Raganato et al. (2017b), and others from the corresponding papers."
],
"extractive_spans": [],
"free_form_answer": "Two knowledge-based systems,\ntwo traditional word expert supervised systems, six recent neural-based systems, and one BERT feature-based system.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: F1-score (%) for fine-grained English all-words WSD on the test sets in the framework of Raganato et al. (2017b) (including the development set SE07). Bold font indicates best systems. The five blocks list the MFS baseline, two knowledge-based systems, two traditional word expert supervised systems, six recent neural-based systems and our systems, respectively. Results in first three blocks come from Raganato et al. (2017b), and others from the corresponding papers."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"c75c7abaf45df68e45cd9103d3c72d4eb373ef2b",
"f5149430c77e0480552056510df9942980f11ea6"
],
"answer": [
{
"evidence": [
"BERT can explicitly model the relationship of a pair of texts, which has shown to be beneficial to many pair-wise natural language understanding tasks. In order to fully leverage gloss information, we propose GlossBERT to construct context-gloss pairs from all possible senses of the target word in WordNet, thus treating WSD task as a sentence-pair classification problem."
],
"extractive_spans": [
"construct context-gloss pairs from all possible senses of the target word in WordNet, thus treating WSD task as a sentence-pair classification problem"
],
"free_form_answer": "",
"highlighted_evidence": [
" In order to fully leverage gloss information, we propose GlossBERT to construct context-gloss pairs from all possible senses of the target word in WordNet, thus treating WSD task as a sentence-pair classification problem."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we focus on how to better leverage gloss information in a supervised neural WSD system. Recently, the pre-trained language models, such as ELMo BIBREF14 and BERT BIBREF15, have shown their effectiveness to alleviate the effort of feature engineering. Especially, BERT has achieved excellent results in question answering (QA) and natural language inference (NLI). We construct context-gloss pairs from glosses of all possible senses (in WordNet) of the target word, thus treating WSD task as a sentence-pair classification problem. We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on WSD task. In particular, our contribution is two-fold:"
],
"extractive_spans": [
"construct context-gloss pairs from glosses of all possible senses (in WordNet) of the target word"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we focus on how to better leverage gloss information in a supervised neural WSD system. Recently, the pre-trained language models, such as ELMo BIBREF14 and BERT BIBREF15, have shown their effectiveness to alleviate the effort of feature engineering. Especially, BERT has achieved excellent results in question answering (QA) and natural language inference (NLI). We construct context-gloss pairs from glosses of all possible senses (in WordNet) of the target word, thus treating WSD task as a sentence-pair classification problem. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"317e6b786995194b3e28ce83e510af13be46441c",
"5a64516e70ed061688a74cfa2029957f60b8500e"
],
"answer": [
{
"evidence": [
"Following previous work BIBREF13, BIBREF12, BIBREF10, BIBREF17, BIBREF9, BIBREF7, we choose SemCor3.0 as training corpus, which is the largest corpus manually annotated with WordNet sense for WSD."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Following previous work BIBREF13, BIBREF12, BIBREF10, BIBREF17, BIBREF9, BIBREF7, we choose SemCor3.0 as training corpus, which is the largest corpus manually annotated with WordNet sense for WSD."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"a05d070223e2235f35b76c4691659558b92f2b92",
"d2a82db012bbd75f87a131c88ccc19d91187a1a5"
],
"answer": [
{
"evidence": [
"We use the pre-trained uncased BERT$_\\mathrm {BASE}$ model for fine-tuning, because we find that BERT$_\\mathrm {LARGE}$ model performs slightly worse than BERT$_\\mathrm {BASE}$ in this task. The number of Transformer blocks is 12, the number of the hidden layer is 768, the number of self-attention heads is 12, and the total number of parameters of the pre-trained model is 110M. When fine-tuning, we use the development set (SE07) to find the optimal settings for our experiments. We keep the dropout probability at 0.1, set the number of epochs to 4. The initial learning rate is 2e-5, and the batch size is 64."
],
"extractive_spans": [],
"free_form_answer": "small BERT",
"highlighted_evidence": [
"We use the pre-trained uncased BERT$_\\mathrm {BASE}$ model for fine-tuning, because we find that BERT$_\\mathrm {LARGE}$ model performs slightly worse than BERT$_\\mathrm {BASE}$ in this task. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the pre-trained uncased BERT$_\\mathrm {BASE}$ model for fine-tuning, because we find that BERT$_\\mathrm {LARGE}$ model performs slightly worse than BERT$_\\mathrm {BASE}$ in this task. The number of Transformer blocks is 12, the number of the hidden layer is 768, the number of self-attention heads is 12, and the total number of parameters of the pre-trained model is 110M. When fine-tuning, we use the development set (SE07) to find the optimal settings for our experiments. We keep the dropout probability at 0.1, set the number of epochs to 4. The initial learning rate is 2e-5, and the batch size is 64."
],
"extractive_spans": [],
"free_form_answer": "small BERT",
"highlighted_evidence": [
"We use the pre-trained uncased BERT$_\\mathrm {BASE}$ model for fine-tuning, because we find that BERT$_\\mathrm {LARGE}$ model performs slightly worse than BERT$_\\mathrm {BASE}$ in this task. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"9d28821616f01cc231834ea6d64d300d24fbc83a"
],
"answer": [
{
"evidence": [
"The fourth block shows several recent neural-based methods. Bi-LSTM BIBREF11 is a baseline for neural models. Bi-LSTM$_{+ att. + LEX + POS}$ BIBREF10 is a multi-task learning framework for WSD, POS tagging, and LEX with self-attention mechanism, which converts WSD to a sequence learning task. GAS$_{ext}$ BIBREF12 is a variant of GAS which is a gloss-augmented variant of the memory network by extending gloss knowledge. CAN$^s$ and HCAN BIBREF13 are sentence-level and hierarchical co-attention neural network models which leverage gloss knowledge."
],
"extractive_spans": [
"converts WSD to a sequence learning task",
" leverage gloss knowledge",
"by extending gloss knowledge"
],
"free_form_answer": "",
"highlighted_evidence": [
"Bi-LSTM BIBREF11 is a baseline for neural models. Bi-LSTM$_{+ att. + LEX + POS}$ BIBREF10 is a multi-task learning framework for WSD, POS tagging, and LEX with self-attention mechanism, which converts WSD to a sequence learning task. GAS$_{ext}$ BIBREF12 is a variant of GAS which is a gloss-augmented variant of the memory network by extending gloss knowledge. CAN$^s$ and HCAN BIBREF13 are sentence-level and hierarchical co-attention neural network models which leverage gloss knowledge."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is the state of the art system mentioned?",
"Do they incoprorate WordNet into the model?",
"Is SemCor3.0 reflective of English language data in general?",
"Do they use large or small BERT?",
"How does the neural network architecture accomodate an unknown amount of senses per word?"
],
"question_id": [
"e82fa03f1638a8c59ceb62bb9a6b41b498950e1f",
"7ab9c0b4ceca1c142ff068f85015a249b14282d0",
"00050f7365e317dc0487e282a4c33804b58b1fb3",
"c5b0ed5db65051eebd858beaf303809aa815e8e5",
"10fb7dc031075946153baf0a0599e126de29e3a4"
],
"question_writer": [
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255"
],
"search_query": [
"expert",
"expert",
"expert",
"expert",
"expert"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: The construction methods. The sentence is taken from SemEval-2007 WSD dataset. The ellipsis “...” indicates the remainder of the sentence or the gloss.",
"Table 2: Statistics of the different parts of speech annotations in English all-words WSD datasets.",
"Table 3: F1-score (%) for fine-grained English all-words WSD on the test sets in the framework of Raganato et al. (2017b) (including the development set SE07). Bold font indicates best systems. The five blocks list the MFS baseline, two knowledge-based systems, two traditional word expert supervised systems, six recent neural-based systems and our systems, respectively. Results in first three blocks come from Raganato et al. (2017b), and others from the corresponding papers."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png"
]
} | [
"What is the state of the art system mentioned?",
"Do they use large or small BERT?"
] | [
[
"1908.07245-4-Table3-1.png"
],
[
"1908.07245-Experiments ::: Settings-0"
]
] | [
"Two knowledge-based systems,\ntwo traditional word expert supervised systems, six recent neural-based systems, and one BERT feature-based system.",
"small BERT"
] | 213 |
1901.01010 | A Joint Model for Multimodal Document Quality Assessment | The quality of a document is affected by various factors, including grammaticality, readability, stylistics, and expertise depth, making the task of document quality assessment a complex one. In this paper, we explore this task in the context of assessing the quality of Wikipedia articles and academic papers. Observing that the visual rendering of a document can capture implicit quality indicators that are not present in the document text --- such as images, font choices, and visual layout --- we propose a joint model that combines the text content with a visual rendering of the document for document quality assessment. Experimental results over two datasets reveal that textual and visual features are complementary, achieving state-of-the-art results. | {
"paragraphs": [
[
"The task of document quality assessment is to automatically assess a document according to some predefined inventory of quality labels. This can take many forms, including essay scoring (quality = language quality, coherence, and relevance to a topic), job application filtering (quality = suitability for role + visual/presentational quality of the application), or answer selection in community question answering (quality = actionability + relevance of the answer to the question). In the case of this paper, we focus on document quality assessment in two contexts: Wikipedia document quality classification, and whether a paper submitted to a conference was accepted or not.",
"Automatic quality assessment has obvious benefits in terms of time savings and tractability in contexts where the volume of documents is large. In the case of dynamic documents (possibly with multiple authors), such as in the case of Wikipedia, it is particularly pertinent, as any edit potentially has implications for the quality label of that document (and around 10 English Wikipedia documents are edited per second). Furthermore, when the quality assessment task is decentralized (as in the case of Wikipedia and academic paper assessment), quality criteria are often applied inconsistently by different people, where an automatic document quality assessment system could potentially reduce inconsistencies and enable immediate author feedback.",
"Current studies on document quality assessment mainly focus on textual features. For example, BIBREF0 examine features such as the article length and the number of headings to predict the quality class of a Wikipedia article. In contrast to these studies, in this paper, we propose to combine text features with visual features, based on a visual rendering of the document. Figure 1 illustrates our intuition, relative to Wikipedia articles. Without being able to read the text, we can tell that the article in Figure 1 has higher quality than Figure 1 , as it has a detailed infobox, extensive references, and a variety of images. Based on this intuition, we aim to answer the following question: can we achieve better accuracy on document quality assessment by complementing textual features with visual features?",
"Our visual model is based on fine-tuning an Inception V3 model BIBREF1 over visual renderings of documents, while our textual model is based on a hierarchical biLSTM. We further combine the two into a joint model. We perform experiments on two datasets: a Wikipedia dataset novel to this paper, and an arXiv dataset provided by BIBREF2 split into three sub-parts based on subject category. Experimental results on the visual renderings of documents show that implicit quality indicators, such as images and visual layout, can be captured by an image classifier, at a level comparable to a text classifier. When we combine the two models, we achieve state-of-the-art results over 3/4 of our datasets.",
"This paper makes the following contributions:",
"All code and data associated with this research will be released on publication."
],
[
"A variety of approaches have been proposed for document quality assessment across different domains: Wikipedia article quality assessment, academic paper rating, content quality assessment in community question answering (cQA), and essay scoring. Among these approaches, some use hand-crafted features while others use neural networks to learn features from documents. For each domain, we first briefly describe feature-based approaches and then review neural network-based approaches. Wikipedia article quality assessment: Quality assessment of Wikipedia articles is a task that assigns a quality class label to a given Wikipedia article, mirroring the quality assessment process that the Wikipedia community carries out manually. Many approaches have been proposed that use features from the article itself, meta-data features (e.g., the editors, and Wikipedia article revision history), or a combination of the two. Article-internal features capture information such as whether an article is properly organized, with supporting evidence, and with appropriate terminology. For example, BIBREF3 use writing styles represented by binarized character trigram features to identify featured articles. BIBREF4 and BIBREF0 explore the number of headings, images, and references in the article. BIBREF5 use nine readability scores, such as the percentage of difficult words in the document, to measure the quality of the article. Meta-data features, which are indirect indicators of article quality, are usually extracted from revision history, and the interaction between editors and articles. For example, one heuristic that has been proposed is that higher-quality articles have more edits BIBREF6 , BIBREF7 . BIBREF8 use the percentage of registered editors and the total number of editors of an article. Article–editor dependencies have also been explored. For example, BIBREF9 use the authority of editors to measure the quality of Wikipedia articles, where the authority of editors is determined by the articles they edit. Deep learning approaches to predicting Wikipedia article quality have also been proposed. For example, BIBREF10 use a version of doc2vec BIBREF11 to represent articles, and feed the document embeddings into a four hidden layer neural network. BIBREF12 first obtain sentence representations by averaging words within a sentence, and then apply a biLSTM BIBREF13 to learn a document-level representation, which is combined with hand-crafted features as side information. BIBREF14 exploit two stacked biLSTMs to learn document representations.",
"Academic paper rating: Academic paper rating is a relatively new task in NLP/AI, with the basic formulation being to automatically predict whether to accept or reject a paper. BIBREF2 explore hand-crafted features, such as the length of the title, whether specific words (such as outperform, state-of-the-art, and novel) appear in the abstract, and an embedded representation of the abstract as input to different downstream learners, such as logistic regression, decision tree, and random forest. BIBREF15 exploit a modularized hierarchical convolutional neural network (CNN), where each paper section is treated as a module. For each paper section, they train an attention-based CNN, and an attentive pooling layer is applied to the concatenated representation of each section, which is then fed into a softmax layer.",
"Content quality assessment in cQA: Automatic quality assessment in cQA is the task of determining whether an answer is of high quality, selected as the best answer, or ranked higher than other answers. To measure answer content quality in cQA, researchers have exploited various features from different sources, such as the answer content itself, the answerer's profile, interactions among users, and usage of the content. The most common feature used is the answer length BIBREF16 , BIBREF17 , with other features including: syntactic and semantic features, such as readability scores. BIBREF18 ; similarity between the question and the answer at lexical, syntactic, and semantic levels BIBREF18 , BIBREF19 , BIBREF20 ; or user data (e.g., a user's status points or the number of answers written by the user). There have also been approaches using neural networks. For example, BIBREF21 combine CNN-learned representations with hand-crafted features to predict answer quality. BIBREF22 use a 2-dimensional CNN to learn the semantic relevance of an answer to the question, and apply an LSTM to the answer sequence to model thread context. BIBREF23 and BIBREF24 model the problem similarly to machine translation quality estimation, treating answers as competing translation hypotheses and the question as the reference translation, and apply neural machine translation to the problem. Essay scoring: Automated essay scoring is the task of assigning a score to an essay, usually in the context of assessing the language ability of a language learner. The quality of an essay is affected by the following four primary dimensions: topic relevance, organization and coherence, word usage and sentence complexity, and grammar and mechanics. To measure whether an essay is relevant to its “prompt” (the description of the essay topic), lexical and semantic overlap is commonly used BIBREF25 , BIBREF26 . BIBREF27 explore word features, such as the number of verb formation errors, average word frequency, and average word length, to measure word usage and lexical complexity. BIBREF28 use sentence structure features to measure sentence variety. The effects of grammatical and mechanic errors on the quality of an essay are measured via word and part-of-speech $n$ -gram features and “mechanics” features BIBREF29 (e.g., spelling, capitalization, and punctuation), respectively. BIBREF30 , BIBREF31 , and BIBREF32 use an LSTM to obtain an essay representation, which is used as the basis for classification. Similarly, BIBREF33 utilize a CNN to obtain sentence representation and an LSTM to obtain essay representation, with an attention layer at both the sentence and essay levels."
],
[
"We treat document quality assessment as a classification problem, i.e., given a document, we predict its quality class (e.g., whether an academic paper should be accepted or rejected). The proposed model is a joint model that integrates visual features learned through Inception V3 with textual features learned through a biLSTM. In this section, we present the details of the visual and textual embeddings, and finally describe how we combine the two. We return to discuss hyper-parameter settings and the experimental configuration in the Experiments section."
],
[
"A wide range of models have been proposed to tackle the image classification task, such as VGG BIBREF34 , ResNet BIBREF35 , Inception V3 BIBREF1 , and Xception BIBREF36 . However, to the best of our knowledge, there is no existing work that has proposed to use visual renderings of documents to assess document quality. In this paper, we use Inception V3 pretrained on ImageNet (“Inception” hereafter) to obtain visual embeddings of documents, noting that any image classifier could be applied to our task. The input to Inception is a visual rendering (screenshot) of a document, and the output is a visual embedding, which we will later integrate with our textual embedding.",
"Based on the observation that it is difficult to decide what types of convolution to apply to each layer (such as 3 $\\times $ 3 or 5 $\\times $ 5), the basic Inception model applies multiple convolution filters in parallel and concatenates the resulting features, which are fed into the next layer. This has the benefit of capturing both local features through smaller convolutions and abstracted features through larger convolutions. Inception is a hybrid of multiple Inception models of different architectures. To reduce computational cost, Inception also modifies the basic model by applying a 1 $\\times $ 1 convolution to the input and factorizing larger convolutions into smaller ones."
],
[
"We adopt a bi-directional LSTM model to generate textual embeddings for document quality assessment, following the method of BIBREF12 (“biLSTM” hereafter). The input to biLSTM is a textual document, and the output is a textual embedding, which will later integrate with the visual embedding.",
"For biLSTM, each word is represented as a word embedding BIBREF37 , and an average-pooling layer is applied to the word embeddings to obtain the sentence embedding, which is fed into a bi-directional LSTM to generate the document embedding from the sentence embeddings. Then a max-pooling layer is applied to select the most salient features from the component sentences."
],
[
"The proposed joint model (“Joint” hereafter) combines the visual and textual embeddings (output of Inception and biLSTM) via a simple feed-forward layer and softmax over the document label set, as shown in Figure 2 . We optimize our model based on cross-entropy loss."
],
[
"In this section, we first describe the two datasets used in our experiments: (1) Wikipedia, and (2) arXiv. Then, we report the experimental details and results."
],
[
"The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community. Wikipedia articles are labelled with one of six quality classes, in descending order of quality: Featured Article (“FA”), Good Article (“GA”), B-class Article (“B”), C-class Article (“C”), Start Article (“Start”), and Stub Article (“Stub”). A description of the criteria associated with the different classes can be found in the Wikipedia grading scheme page. The quality class of a Wikipedia article is assigned by Wikipedia reviewers or any registered user, who can discuss through the article's talk page to reach consensus. We constructed the dataset by first crawling all articles from each quality class repository, e.g., we get FA articles by crawling pages from the FA repository: https://en.wikipedia.org/wiki/Category:Featured_articles. This resulted in around 5K FA, 28K GA, 212K B, 533K C, 2.6M Start, and 3.2M Stub articles.",
"We randomly sampled 5,000 articles from each quality class and removed all redirect pages, resulting in a dataset of 29,794 articles. As the wikitext contained in each document contains markup relating to the document category such as {Featured Article} or {geo-stub}, which reveals the label, we remove such information. We additionally randomly partitioned this dataset into training, development, and test splits based on a ratio of 8:1:1. Details of the dataset are summarized in Table 1 .",
"We generate a visual representation of each document via a 1,000 $\\times $ 2,000-pixel screenshot of the article via a PhantomJS script over the rendered version of the article, ensuring that the screenshot and wikitext versions of the article are the same version. Any direct indicators of document quality (such as the FA indicator, which is a bronze star icon in the top right corner of the webpage) are removed from the screenshot.",
"The arXiv dataset BIBREF2 consists of three subsets of academic articles under the arXiv repository of Computer Science (cs), from the three subject areas of: Artificial Intelligence (cs.ai), Computation and Language (cs.cl), and Machine Learning (cs.lg). In line with the original dataset formulation BIBREF2 , a paper is considered to have been accepted (i.e. is positively labeled) if it matches a paper in the DBLP database or is otherwise accepted by any of the following conferences: ACL, EMNLP, NAACL, EACL, TACL, NIPS, ICML, ICLR, or AAAI. Failing this, it is considered to be rejected (noting that some of the papers may not have been submitted to one of these conferences). The median numbers of pages for papers in cs.ai, cs.cl, and cs.lg are 11, 10, and 12, respectively. To make sure each page in the PDF file has the same size in the screenshot, we crop the PDF file of a paper to the first 12; we pad the PDF file with blank pages if a PDF file has less than 12 pages, using the PyPDF2 Python package. We then use ImageMagick to convert the 12-page PDF file to a single 1,000 $\\times $ 2,000 pixel screenshot. Table 2 details this dataset, where the “Accepted” column denotes the percentage of positive instances (accepted papers) in each subset."
],
[
"As discussed above, our model has two main components — biLSTM and Inception— which generate textual and visual representations, respectively. For the biLSTM component, the documents are preprocessed as described in BIBREF12 , where an article is divided into sentences and tokenized using NLTK BIBREF38 . Words appearing more than 20 times are retained when building the vocabulary. All other words are replaced by the special UNK token. We use the pre-trained GloVe BIBREF39 50-dimensional word embeddings to represent words. For words not in GloVe, word embeddings are randomly initialized based on sampling from a uniform distribution $U(-1, 1)$ . All word embeddings are updated in the training process. We set the LSTM hidden layer size to 256. The concatenation of the forward and backward LSTMs thus gives us 512 dimensions for the document embedding. A dropout layer is applied at the sentence and document level, respectively, with a probability of 0.5.",
"For Inception, we adopt data augmentation techniques in the training with a “nearest” filling mode, a zoom range of 0.1, a width shift range of 0.1, and a height shift range of 0.1. As the original screenshots have the size of 1,000 $\\times 2$ ,000 pixels, they are resized to 500 $\\times $ 500 to feed into Inception, where the input shape is (500, 500, 3). A dropout layer is applied with a probability of 0.5. Then, a GlobalAveragePooling2D layer is applied, which produces a 2,048 dimensional representation.",
"For the Joint model, we get a representation of 2,560 dimensions by concatenating the 512 dimensional representation from the biLSTM with the 2,048 dimensional representation from Inception. The dropout layer is applied to the two components with a probability of 0.5. For biLSTM, we use a mini-batch size of 128 and a learning rate of 0.001. For both Inception and joint model, we use a mini-batch size of 16 and a learning rate of 0.0001. All hyper-parameters were set empirically over the development data, and the models were optimized using the Adam optimizer BIBREF40 .",
"In the training phase, the weights in Inception are initialized by parameters pretrained on ImageNet, and the weights in biLSTM are randomly initialized (except for the word embeddings). We train each model for 50 epochs. However, to prevent overfitting, we adopt early stopping, where we stop training the model if the performance on the development set does not improve for 20 epochs. For evaluation, we use (micro-)accuracy, following previous studies BIBREF5 , BIBREF2 ."
],
[
"We compare our models against the following five baselines:",
"Majority: the model labels all test samples with the majority class of the training data.",
"Benchmark: a benchmark method from the literature. In the case of Wikipedia, this is BIBREF5 , who use structural features and readability scores as features to build a random forest classifier; for arXiv, this is BIBREF2 , who use hand-crafted features, such as the number of references and TF-IDF weighted bag-of-words in abstract, to build a classifier based on the best of logistic regression, multi-layer perception, and AdaBoost.",
"Doc2Vec: doc2vec BIBREF11 to learn document embeddings with a dimension of 500, and a 4-layer feed-forward classification model on top of this, with 2000, 1000, 500, and 200 dimensions, respectively.",
"biLSTM: first derive a sentence representation by averaging across words in a sentence, then feed the sentence representation into a biLSTM and a maxpooling layer over output sequence to learn a document level representation with a dimension of 512, which is used to predict document quality.",
"Inception $_{\\text{fixed}}$ : the frozen Inception model, where only parameters in the last layer are fine-tuned during training.",
"The hyper-parameters of Benchmark, Doc2Vec, and biLSTM are based on the corresponding papers except that: (1) we fine-tune the feed forward layer of Doc2Vec on the development set and train the model 300 epochs on Wikipedia and 50 epochs on arXiv; (2) we do not use hand-crafted features for biLSTM as we want the baselines to be comparable to our models, and the main focus of this paper is not to explore the effects of hand-crafted features (e.g., see BIBREF12 )."
],
[
"Table 3 shows the performance of the different models over our two datasets, in the form of the average accuracy on the test set (along with the standard deviation) over 10 runs, with different random initializations.",
"On Wikipedia, we observe that the performance of biLSTM, Inception, and Joint is much better than that of all four baselines. Inception achieves 2.9% higher accuracy than biLSTM. The performance of Joint achieves an accuracy of 59.4%, which is 5.3% higher than using textual features alone (biLSTM) and 2.4% higher than using visual features alone (Inception). Based on a one-tailed Wilcoxon signed-rank test, the performance of Joint is statistically significant ( $p<0.05$ ). This shows that the textual and visual features complement each other, achieving state-of-the-art results in combination.",
"For arXiv, baseline methods Majority, Benchmark, and Inception $_{\\text{fixed}}$ outperform biLSTM over cs.ai, in large part because of the class imbalance in this dataset (90% of papers are rejected). Surprisingly, Inception $_{\\text{fixed}}$ is better than Majority and Benchmark over the arXiv cs.lg subset, which verifies the usefulness of visual features, even when only the last layer is fine-tuned. Table 3 also shows that Inception and biLSTM achieve similar performance on arXiv, showing that textual and visual representations are equally discriminative: Inception and biLSTM are indistinguishable over cs.cl; biLSTM achieves 1.8% higher accuracy over cs.lg, while Inception achieves 1.3% higher accuracy over cs.ai. Once again, the Joint model achieves the highest accuracy on cs.ai and cs.cl by combining textual and visual representations (at a level of statistical significance for cs.ai). This, again, confirms that textual and visual features complement each other, and together they achieve state-of-the-art results. On arXiv cs.lg, Joint achieves a 0.6% higher accuracy than Inception by combining visual features and textual features, but biLSTM achieves the highest accuracy. One characteristic of cs.lg documents is that they tend to contain more equations than the other two arXiv datasets, and preliminary analysis suggests that the biLSTM is picking up on a correlation between the volume/style of mathematical presentation and the quality of the document."
],
[
"In this section, we first analyze the performance of Inception and Joint. We also analyze the performance of different models on different quality classes. The high-level representations learned by different models are also visualized and discussed. As the Wikipedia test set is larger and more balanced than that of arXiv, our analysis will focus on Wikipedia."
],
[
"To better understand the performance of Inception, we generated the gradient-based class activation map BIBREF41 , by maximizing the outputs of each class in the penultimate layer, as shown in Figure 3 . From Figure 3 and Figure 3 , we can see that Inception identifies the two most important regions (one at the top corresponding to the table of contents, and the other at the bottom, capturing both document length and references) that contribute to the FA class prediction, and a region in the upper half of the image that contributes to the GA class prediction (capturing the length of the article body). From Figure 3 and Figure 3 , we can see that the most important regions in terms of B and C class prediction capture images (down the left and right of the page, in the case of B and C), and document length/references. From Figure 3 and Figure 3 , we can see that Inception finds that images in the top right corner are the strongest predictor of Start class prediction, and (the lack of) images/the link bar down the left side of the document are the most important for Stub class prediction."
],
[
"Table 4 shows the confusion matrix of Joint on Wikipedia. We can see that more than 50% of documents for each quality class are correctly classified, except for the C class where more documents are misclassified into B. Analysis shows that when misclassified, documents are usually misclassified into adjacent quality classes, which can be explained by the Wikipedia grading scheme, where the criteria for adjacent quality classes are more similar.",
"We also provide a breakdown of precision (“ $\\mathcal {P}$ ”), recall (“ $\\mathcal {R}$ ”), and F1 score (“ $\\mathcal {F}_{\\beta =1}$ ”) for biLSTM, Inception, and Joint across the quality classes in Table 5 . We can see that Joint achieves the highest accuracy in 11 out of 18 cases. It is also worth noting that all models achieve higher scores for FA, GA, and Stub articles than B, C and Start articles. This can be explained in part by the fact that FA and GA articles must pass an official review based on structured criteria, and in part by the fact that Stub articles are usually very short, which is discriminative for Inception, and Joint. All models perform worst on the B and C quality classes. It is difficult to differentiate B articles from C articles even for Wikipedia contributors. As evidence of this, when we crawled a new dataset including talk pages with quality class votes from Wikipedia contributors, we found that among articles with three or more quality labels, over 20% percent of B and C articles have inconsistent votes from Wikipedia contributors, whereas for FA and GA articles the number is only 0.7%.",
"We further visualize the learned document representations of biLSTM, Inception, and Joint in the form of a t-SNE plot BIBREF42 in Figure 4 . The degree of separation between Start and Stub achieved by Inception is much greater than for biLSTM, with the separation between Start and Stub achieved by Joint being the clearest among the three models. Inception and Joint are better than biLSTM at separating Start and C. Joint achieves slightly better performance than Inception in separating GA and FA. We can also see that it is difficult for all models to separate B and C, which is consistent with the findings of Tables 4 and 5 ."
],
[
"We proposed to use visual renderings of documents to capture implicit document quality indicators, such as font choices, images, and visual layout, which are not captured in textual content. We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. These results underline the feasibility of assessing document quality via visual features, and the complementarity of visual and textual document representations for quality assessment."
]
],
"section_name": [
"Introduction",
"Related Work",
"The Proposed Joint Model",
"Visual Embedding Learning",
"Textual Embedding Learning",
"The Joint Model",
"Experiments",
"Datasets",
"Experimental Setting",
"Baseline Approaches",
"Experimental Results",
"Analysis",
"Inception",
"Joint",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"5c89df6284cd22b539a9f6613273c5fab1e2a8f6"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"316e121b1e4fd2ca8a99cb9fc9ae3d8b55250ca5",
"7c61fe9072dbbffabed837c8c2ade9cce21a70d6"
],
"answer": [
{
"evidence": [
"Our visual model is based on fine-tuning an Inception V3 model BIBREF1 over visual renderings of documents, while our textual model is based on a hierarchical biLSTM. We further combine the two into a joint model. We perform experiments on two datasets: a Wikipedia dataset novel to this paper, and an arXiv dataset provided by BIBREF2 split into three sub-parts based on subject category. Experimental results on the visual renderings of documents show that implicit quality indicators, such as images and visual layout, can be captured by an image classifier, at a level comparable to a text classifier. When we combine the two models, we achieve state-of-the-art results over 3/4 of our datasets.",
"We proposed to use visual renderings of documents to capture implicit document quality indicators, such as font choices, images, and visual layout, which are not captured in textual content. We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. These results underline the feasibility of assessing document quality via visual features, and the complementarity of visual and textual document representations for quality assessment."
],
"extractive_spans": [
"visual model is based on fine-tuning an Inception V3 model BIBREF1 over visual renderings of documents, while our textual model is based on a hierarchical biLSTM. We further combine the two into a joint model. ",
"neural network models"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our visual model is based on fine-tuning an Inception V3 model BIBREF1 over visual renderings of documents, while our textual model is based on a hierarchical biLSTM. We further combine the two into a joint model. ",
"We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We treat document quality assessment as a classification problem, i.e., given a document, we predict its quality class (e.g., whether an academic paper should be accepted or rejected). The proposed model is a joint model that integrates visual features learned through Inception V3 with textual features learned through a biLSTM. In this section, we present the details of the visual and textual embeddings, and finally describe how we combine the two. We return to discuss hyper-parameter settings and the experimental configuration in the Experiments section."
],
"extractive_spans": [
"Inception V3",
"biLSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"The proposed model is a joint model that integrates visual features learned through Inception V3 with textual features learned through a biLSTM."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"0974b080edadfde80be0ee44ba02f449181fcc24",
"81411c4526e552b8003efe274f0a5e70f37f6284"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"3ac21164fc20e40db8abff77a1facf6ee0f31b5f",
"b950b110bcde785db7c6dd0cc0fea4d8027ef76d"
],
"answer": [
{
"evidence": [
"We proposed to use visual renderings of documents to capture implicit document quality indicators, such as font choices, images, and visual layout, which are not captured in textual content. We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. These results underline the feasibility of assessing document quality via visual features, and the complementarity of visual and textual document representations for quality assessment."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We proposed to use visual renderings of documents to capture implicit document quality indicators, such as font choices, images, and visual layout, which are not captured in textual content. We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. These results underline the feasibility of assessing document quality via visual features, and the complementarity of visual and textual document representations for quality assessment.",
"FLOAT SELECTED: Table 1: Experimental results. The best result for each dataset is indicated in bold, and marked with “†” if it is significantly higher than the second best result (based on a one-tailed Wilcoxon signed-rank test; p < 0.05). The results of Benchmark on Peer Review are from the original paper, where the standard deviation values were not reported."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. ",
"FLOAT SELECTED: Table 1: Experimental results. The best result for each dataset is indicated in bold, and marked with “†” if it is significantly higher than the second best result (based on a one-tailed Wilcoxon signed-rank test; p < 0.05). The results of Benchmark on Peer Review are from the original paper, where the standard deviation values were not reported."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"ed1a1df25ec1afd52388f24b9d8281481956d845"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Experimental results. The best result for each dataset is indicated in bold, and marked with “†” if it is significantly higher than the second best result (based on a one-tailed Wilcoxon signed-rank test; p < 0.05). The results of Benchmark on Peer Review are from the original paper, where the standard deviation values were not reported."
],
"extractive_spans": [],
"free_form_answer": "59.4% on wikipedia dataset, 93.4% on peer-reviewed archive AI papers, 77.1% on peer-reviewed archive Computation and Language papers, and 79.9% on peer-reviewed archive Machine Learning papers",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Experimental results. The best result for each dataset is indicated in bold, and marked with “†” if it is significantly higher than the second best result (based on a one-tailed Wilcoxon signed-rank test; p < 0.05). The results of Benchmark on Peer Review are from the original paper, where the standard deviation values were not reported."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"11320bef73e775a60a772fc0e3f732ec8e5ade41"
],
"answer": [
{
"evidence": [
"We proposed to use visual renderings of documents to capture implicit document quality indicators, such as font choices, images, and visual layout, which are not captured in textual content. We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. These results underline the feasibility of assessing document quality via visual features, and the complementarity of visual and textual document representations for quality assessment."
],
"extractive_spans": [],
"free_form_answer": "It depends on the dataset. Experimental results over two datasets reveal that textual and visual features are complementary. ",
"highlighted_evidence": [
"We proposed to use visual renderings of documents to capture implicit document quality indicators, such as font choices, images, and visual layout, which are not captured in textual content. We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. These results underline the feasibility of assessing document quality via visual features, and the complementarity of visual and textual document representations for quality assessment."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"3422ddce21d18df60177fea06b9eba6bdea5dbb8",
"4a69f21ca18ea894839b8f6e09455f5a87c07a39"
],
"answer": [
{
"evidence": [
"The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community. Wikipedia articles are labelled with one of six quality classes, in descending order of quality: Featured Article (“FA”), Good Article (“GA”), B-class Article (“B”), C-class Article (“C”), Start Article (“Start”), and Stub Article (“Stub”). A description of the criteria associated with the different classes can be found in the Wikipedia grading scheme page. The quality class of a Wikipedia article is assigned by Wikipedia reviewers or any registered user, who can discuss through the article's talk page to reach consensus. We constructed the dataset by first crawling all articles from each quality class repository, e.g., we get FA articles by crawling pages from the FA repository: https://en.wikipedia.org/wiki/Category:Featured_articles. This resulted in around 5K FA, 28K GA, 212K B, 533K C, 2.6M Start, and 3.2M Stub articles."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community. Wikipedia articles are labelled with one of six quality classes, in descending order of quality: Featured Article (“FA”), Good Article (“GA”), B-class Article (“B”), C-class Article (“C”), Start Article (“Start”), and Stub Article (“Stub”). A description of the criteria associated with the different classes can be found in the Wikipedia grading scheme page. The quality class of a Wikipedia article is assigned by Wikipedia reviewers or any registered user, who can discuss through the article's talk page to reach consensus. We constructed the dataset by first crawling all articles from each quality class repository, e.g., we get FA articles by crawling pages from the FA repository: https://en.wikipedia.org/wiki/Category:Featured_articles. This resulted in around 5K FA, 28K GA, 212K B, 533K C, 2.6M Start, and 3.2M Stub articles.",
"The arXiv dataset BIBREF2 consists of three subsets of academic articles under the arXiv repository of Computer Science (cs), from the three subject areas of: Artificial Intelligence (cs.ai), Computation and Language (cs.cl), and Machine Learning (cs.lg). In line with the original dataset formulation BIBREF2 , a paper is considered to have been accepted (i.e. is positively labeled) if it matches a paper in the DBLP database or is otherwise accepted by any of the following conferences: ACL, EMNLP, NAACL, EACL, TACL, NIPS, ICML, ICLR, or AAAI. Failing this, it is considered to be rejected (noting that some of the papers may not have been submitted to one of these conferences). The median numbers of pages for papers in cs.ai, cs.cl, and cs.lg are 11, 10, and 12, respectively. To make sure each page in the PDF file has the same size in the screenshot, we crop the PDF file of a paper to the first 12; we pad the PDF file with blank pages if a PDF file has less than 12 pages, using the PyPDF2 Python package. We then use ImageMagick to convert the 12-page PDF file to a single 1,000 $\\times $ 2,000 pixel screenshot. Table 2 details this dataset, where the “Accepted” column denotes the percentage of positive instances (accepted papers) in each subset."
],
"extractive_spans": [],
"free_form_answer": "English",
"highlighted_evidence": [
"The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community. ",
"The arXiv dataset BIBREF2 consists of three subsets of academic articles under the arXiv repository of Computer Science (cs), from the three subject areas of: Artificial Intelligence (cs.ai), Computation and Language (cs.cl), and Machine Learning (cs.lg)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"2cf6a4ccaa19d91c5dfe27d0ee203c3da4538346"
],
"answer": [
{
"evidence": [
"We randomly sampled 5,000 articles from each quality class and removed all redirect pages, resulting in a dataset of 29,794 articles. As the wikitext contained in each document contains markup relating to the document category such as {Featured Article} or {geo-stub}, which reveals the label, we remove such information. We additionally randomly partitioned this dataset into training, development, and test splits based on a ratio of 8:1:1. Details of the dataset are summarized in Table 1 .",
"The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community. Wikipedia articles are labelled with one of six quality classes, in descending order of quality: Featured Article (“FA”), Good Article (“GA”), B-class Article (“B”), C-class Article (“C”), Start Article (“Start”), and Stub Article (“Stub”). A description of the criteria associated with the different classes can be found in the Wikipedia grading scheme page. The quality class of a Wikipedia article is assigned by Wikipedia reviewers or any registered user, who can discuss through the article's talk page to reach consensus. We constructed the dataset by first crawling all articles from each quality class repository, e.g., we get FA articles by crawling pages from the FA repository: https://en.wikipedia.org/wiki/Category:Featured_articles. This resulted in around 5K FA, 28K GA, 212K B, 533K C, 2.6M Start, and 3.2M Stub articles."
],
"extractive_spans": [],
"free_form_answer": "a sample of 29,794 wikipedia articles and 2,794 arXiv papers ",
"highlighted_evidence": [
" 29,794",
"The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community. Wikipedia articles are labelled with one of six quality classes, in descending order of quality: Featured Article (“FA”), Good Article (“GA”), B-class Article (“B”), C-class Article (“C”), Start Article (“Start”), and Stub Article (“Stub”).",
"We randomly sampled 5,000 articles from each quality class and removed all redirect pages, resulting in a dataset of 29,794 articles."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"e7d6a6f5b941e1092fd56036d45fd141999db228",
"f07ad9db9b91d3edf6db2365df8902f86d4a1629"
],
"answer": [
{
"evidence": [
"The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community. Wikipedia articles are labelled with one of six quality classes, in descending order of quality: Featured Article (“FA”), Good Article (“GA”), B-class Article (“B”), C-class Article (“C”), Start Article (“Start”), and Stub Article (“Stub”). A description of the criteria associated with the different classes can be found in the Wikipedia grading scheme page. The quality class of a Wikipedia article is assigned by Wikipedia reviewers or any registered user, who can discuss through the article's talk page to reach consensus. We constructed the dataset by first crawling all articles from each quality class repository, e.g., we get FA articles by crawling pages from the FA repository: https://en.wikipedia.org/wiki/Category:Featured_articles. This resulted in around 5K FA, 28K GA, 212K B, 533K C, 2.6M Start, and 3.2M Stub articles.",
"The arXiv dataset BIBREF2 consists of three subsets of academic articles under the arXiv repository of Computer Science (cs), from the three subject areas of: Artificial Intelligence (cs.ai), Computation and Language (cs.cl), and Machine Learning (cs.lg). In line with the original dataset formulation BIBREF2 , a paper is considered to have been accepted (i.e. is positively labeled) if it matches a paper in the DBLP database or is otherwise accepted by any of the following conferences: ACL, EMNLP, NAACL, EACL, TACL, NIPS, ICML, ICLR, or AAAI. Failing this, it is considered to be rejected (noting that some of the papers may not have been submitted to one of these conferences). The median numbers of pages for papers in cs.ai, cs.cl, and cs.lg are 11, 10, and 12, respectively. To make sure each page in the PDF file has the same size in the screenshot, we crop the PDF file of a paper to the first 12; we pad the PDF file with blank pages if a PDF file has less than 12 pages, using the PyPDF2 Python package. We then use ImageMagick to convert the 12-page PDF file to a single 1,000 $\\times $ 2,000 pixel screenshot. Table 2 details this dataset, where the “Accepted” column denotes the percentage of positive instances (accepted papers) in each subset."
],
"extractive_spans": [
"Wikipedia articles are labelled with one of six quality classes, in descending order of quality: Featured Article (“FA”), Good Article (“GA”), B-class Article (“B”), C-class Article (“C”), Start Article (“Start”), and Stub Article (“Stub”).",
"The quality class of a Wikipedia article is assigned by Wikipedia reviewers or any registered user, who can discuss through the article's talk page to reach consensus.",
"The arXiv dataset BIBREF2 consists of three subsets of academic articles under the arXiv repository of Computer Science (cs), from the three subject areas of: Artificial Intelligence (cs.ai), Computation and Language (cs.cl), and Machine Learning (cs.lg). In line with the original dataset formulation BIBREF2 , a paper is considered to have been accepted (i.e. is positively labeled) if it matches a paper in the DBLP database or is otherwise accepted by any of the following conferences: ACL, EMNLP, NAACL, EACL, TACL, NIPS, ICML, ICLR, or AAAI. Failing this, it is considered to be rejected (noting that some of the papers may not have been submitted to one of these conferences). "
],
"free_form_answer": "",
"highlighted_evidence": [
"The quality class of a Wikipedia article is assigned by Wikipedia reviewers or any registered user, who can discuss through the article's talk page to reach consensus.",
"The arXiv dataset BIBREF2 consists of three subsets of academic articles under the arXiv repository of Computer Science (cs), from the three subject areas of: Artificial Intelligence (cs.ai), Computation and Language (cs.cl), and Machine Learning (cs.lg). In line with the original dataset formulation BIBREF2 , a paper is considered to have been accepted (i.e. is positively labeled) if it matches a paper in the DBLP database or is otherwise accepted by any of the following conferences: ACL, EMNLP, NAACL, EACL, TACL, NIPS, ICML, ICLR, or AAAI. Failing this, it is considered to be rejected (noting that some of the papers may not have been submitted to one of these conferences)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community. Wikipedia articles are labelled with one of six quality classes, in descending order of quality: Featured Article (“FA”), Good Article (“GA”), B-class Article (“B”), C-class Article (“C”), Start Article (“Start”), and Stub Article (“Stub”). A description of the criteria associated with the different classes can be found in the Wikipedia grading scheme page. The quality class of a Wikipedia article is assigned by Wikipedia reviewers or any registered user, who can discuss through the article's talk page to reach consensus. We constructed the dataset by first crawling all articles from each quality class repository, e.g., we get FA articles by crawling pages from the FA repository: https://en.wikipedia.org/wiki/Category:Featured_articles. This resulted in around 5K FA, 28K GA, 212K B, 533K C, 2.6M Start, and 3.2M Stub articles.",
"The arXiv dataset BIBREF2 consists of three subsets of academic articles under the arXiv repository of Computer Science (cs), from the three subject areas of: Artificial Intelligence (cs.ai), Computation and Language (cs.cl), and Machine Learning (cs.lg). In line with the original dataset formulation BIBREF2 , a paper is considered to have been accepted (i.e. is positively labeled) if it matches a paper in the DBLP database or is otherwise accepted by any of the following conferences: ACL, EMNLP, NAACL, EACL, TACL, NIPS, ICML, ICLR, or AAAI. Failing this, it is considered to be rejected (noting that some of the papers may not have been submitted to one of these conferences). The median numbers of pages for papers in cs.ai, cs.cl, and cs.lg are 11, 10, and 12, respectively. To make sure each page in the PDF file has the same size in the screenshot, we crop the PDF file of a paper to the first 12; we pad the PDF file with blank pages if a PDF file has less than 12 pages, using the PyPDF2 Python package. We then use ImageMagick to convert the 12-page PDF file to a single 1,000 $\\times $ 2,000 pixel screenshot. Table 2 details this dataset, where the “Accepted” column denotes the percentage of positive instances (accepted papers) in each subset."
],
"extractive_spans": [
"quality class labels assigned by the Wikipedia community",
"a paper is considered to have been accepted (i.e. is positively labeled) if it matches a paper in the DBLP database or is otherwise accepted by any of the following conferences: ACL, EMNLP, NAACL, EACL, TACL, NIPS, ICML, ICLR, or AAAI"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community. Wikipedia articles are labelled with one of six quality classes, in descending order of quality: Featured Article (“FA”), Good Article (“GA”), B-class Article (“B”), C-class Article (“C”), Start Article (“Start”), and Stub Article (“Stub”).",
"In line with the original dataset formulation BIBREF2 , a paper is considered to have been accepted (i.e. is positively labeled) if it matches a paper in the DBLP database or is otherwise accepted by any of the following conferences: ACL, EMNLP, NAACL, EACL, TACL, NIPS, ICML, ICLR, or AAAI. Failing this, it is considered to be rejected (noting that some of the papers may not have been submitted to one of these conferences)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Which fonts are the best indicators of high quality?",
"What kind of model do they use?",
"Did they release their data set of academic papers?",
"Do the methods that work best on academic papers also work best on Wikipedia?",
"What is their system's absolute accuracy?",
"Which is more useful, visual or textual features?",
"Which languages do they use?",
"How large is their data set?",
"Where do they get their ground truth quality judgments?"
],
"question_id": [
"e438445cf823893c841b2bc26cdce32ccc3f5cbe",
"12f7fac818f0006cf33269c9eafd41bbb8979a48",
"d5a8fd8bb48dd1f75927e874bdea582b4732a0cd",
"1097768b89f8bd28d6ef6443c94feb04c1a1318e",
"fc1679c714eab822431bbe96f0e9cf4079cd8b8d",
"23e2971c962bb6486bc0a66ff04242170dd22a1d",
"c9bc6f53b941863e801280343afa14248521ce43",
"07b70b2b799b9efa630e8737df8b1dd1284f032c",
"71a0c4f19be4ce1b1bae58a6e8f2a586e125d074"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"wikipedia",
"wikipedia",
"wikipedia",
"wikipedia",
"wikipedia",
"wikipedia",
"wikipedia",
"wikipedia",
"wikipedia"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Visual renderings of two exampleWikipedia documents with different quality labels (not intended to be readable).",
"Figure 2: Heatmap overlapped onto screenshots of FA and Stub. Best viewed in color.",
"Table 1: Experimental results. The best result for each dataset is indicated in bold, and marked with “†” if it is significantly higher than the second best result (based on a one-tailed Wilcoxon signed-rank test; p < 0.05). The results of Benchmark on Peer Review are from the original paper, where the standard deviation values were not reported.",
"Table 2: Confusion matrix of the Joint model onWikipedia. Rows are the actual quality classes and columns are the predicted quality classes. The gray cells are correct predictions."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png"
]
} | [
"What is their system's absolute accuracy?",
"Which is more useful, visual or textual features?",
"Which languages do they use?",
"How large is their data set?"
] | [
[
"1901.01010-4-Table1-1.png"
],
[
"1901.01010-Conclusions-0"
],
[
"1901.01010-Datasets-3",
"1901.01010-Datasets-0"
],
[
"1901.01010-Datasets-1",
"1901.01010-Datasets-0"
]
] | [
"59.4% on wikipedia dataset, 93.4% on peer-reviewed archive AI papers, 77.1% on peer-reviewed archive Computation and Language papers, and 79.9% on peer-reviewed archive Machine Learning papers",
"It depends on the dataset. Experimental results over two datasets reveal that textual and visual features are complementary. ",
"English",
"a sample of 29,794 wikipedia articles and 2,794 arXiv papers "
] | 214 |
2002.01207 | Arabic Diacritic Recovery Using a Feature-Rich biLSTM Model | Diacritics (short vowels) are typically omitted when writing Arabic text, and readers have to reintroduce them to correctly pronounce words. There are two types of Arabic diacritics: the first are core-word diacritics (CW), which specify the lexical selection, and the second are case endings (CE), which typically appear at the end of the word stem and generally specify their syntactic roles. Recovering CEs is relatively harder than recovering core-word diacritics due to inter-word dependencies, which are often distant. In this paper, we use a feature-rich recurrent neural network model that uses a variety of linguistic and surface-level features to recover both core word diacritics and case endings. Our model surpasses all previous state-of-the-art systems with a CW error rate (CWER) of 2.86\% and a CE error rate (CEER) of 3.7% for Modern Standard Arabic (MSA) and CWER of 2.2% and CEER of 2.5% for Classical Arabic (CA). When combining diacritized word cores with case endings, the resultant word error rate is 6.0% and 4.3% for MSA and CA respectively. This highlights the effectiveness of feature engineering for such deep neural models. | {
"paragraphs": [
[
"Modern Standard Arabic (MSA) and Classical Arabic (CA) have two types of vowels, namely long vowels, which are explicitly written, and short vowels, aka diacritics, which are typically omitted in writing but are reintroduced by readers to properly pronounce words. Since diacritics disambiguate the sense of the words in context and their syntactic roles in sentences, automatic diacritic recovery is essential for applications such as text-to-speech and educational tools for language learners, who may not know how to properly verbalize words. Diacritics have two types, namely: core-word (CW) diacritics, which are internal to words and specify lexical selection; and case-endings (CE), which appear on the last letter of word stems, typically specifying their syntactic role. For example, the word “ktb” (كتب>) can have multiple diacritized forms such as “katab” (كَتَب> – meaning “he wrote”) “kutub” (كُتُب> – “books”). While “katab” can only assume one CE, namely “fatHa” (“a”), “kutub” can accept the CEs: “damma” (“u”) (nominal – ex. subject), “a” (accusative – ex. object), “kasra” (“i”) (genitive – ex. PP predicate), or their nunations. There are 14 diacritic combinations. When used as CEs, they typically convey specific syntactic information, namely: fatHa “a” for accusative nouns, past verbs and subjunctive present verbs; kasra “i” for genitive nouns; damma “u” for nominative nouns and indicative present verbs; sukun “o” for jussive present verbs and imperative verbs. FatHa, kasra and damma can be preceded by shadda “$\\sim $” for gemination (consonant doubling) and/or converted to nunation forms following some grammar rules. In addition, according to Arabic orthography and phonology, some words take a virtual (null) “#” marker when they end with certain characters (ex: long vowels). This applies also to all non-Arabic words (ex: punctuation, digits, Latin words, etc.). Generally, function words, adverbs and foreign named entities (NEs) have set CEs (sukun, fatHa or virtual). Similar to other Semitic languages, Arabic allows flexible Verb-Subject-Object as well as Verb-Object-Subject constructs BIBREF1. Such flexibility creates inherent ambiguity, which is resolved by diacritics as in “r$>$Y Emr Ely” (رأى عمر علي> Omar saw Ali/Ali saw Omar). In the absence of diacritics it is not clear who saw whom. Similarly, in the sub-sentence “kAn Alm&tmr AltAsE” (كان المؤتمر التاسع>), if the last word, is a predicate of the verb “kAn”, then the sentence would mean “this conference was the ninth” and would receive a fatHa (a) as a case ending. Conversely, if it was an adjective to the “conference”, then the sentence would mean “the ninth conference was ...” and would receive a damma (u) as a case ending. Thus, a consideration of context is required for proper disambiguation. Due to the inter-word dependence of CEs, they are typically harder to predict compared to core-word diacritics BIBREF2, BIBREF3, BIBREF4, BIBREF5, with CEER of state-of-the-art systems being in double digits compared to nearly 3% for word-cores. Since recovering CEs is akin to shallow parsing BIBREF6 and requires morphological and syntactic processing, it is a difficult problem in Arabic NLP. In this paper, we focus on recovering both CW diacritics and CEs. We employ two separate Deep Neural Network (DNN) architectures for recovering both kinds of diacritic types. We use character-level and word-level bidirectional Long-Short Term Memory (biLSTM) based recurrent neural models for CW diacritic and CE recovery respectively. 
We train models for both Modern Standard Arabic (MSA) and Classical Arabic (CA). For CW diacritics, the model is informed using word segmentation information and a unigram language model. We also employ a unigram language model to perform post correction on the model output. We achieve word error rates for CW diacritics of 2.9% and 2.2% for MSA and CA. The MSA word error rate is 6% lower than the best results in the literature (the RDI diacritizer BIBREF7). The CE model is trained with a rich set of surface, morphological, and syntactic features. The proposed features would aid the biLSTM model in capturing syntactic dependencies indicated by Part-Of-Speech (POS) tags, gender and number features, morphological patterns, and affixes. We show that our model achieves a case ending error rate (CEER) of 3.7% for MSA and 2.5% for CA. For MSA, this CEER is more than 60% lower than other state-of-the-art systems such as Farasa and the RDI diacritizer, which are trained on the same dataset and achieve CEERs of 10.7% and 14.4% respectively. The contributions of this paper are as follows:",
"We employ a character-level RNN model that is informed using word morphological information and a word unigram language model to recover CW diacritics. Our model beats the best state-of-the-art system by 6% for MSA.",
"We introduce a new feature rich RNN-based CE recovery model that achieves errors rates that are 60% lower than the current state-of-the-art for MSA.",
"We explore the effect of different features, which may potentially be exploited for Arabic parsing.",
"We show the effectiveness of our approach for both MSA and CA."
],
[
"Automatic diacritics restoration has been investigated for many different language such as European languages (e.g. Romanian BIBREF8, BIBREF9, French BIBREF10, and Croatian BIBREF11), African languages (e.g. Yorba BIBREF12), Southeast Asian languages (e.g. Vietnamese BIBREF13), Semitic language (e.g. Arabic and Hebrew BIBREF14), and many others BIBREF15. For many languages, diacritic (or accent restoration) is limited to a handful of letters. However, for Semitic languages, diacritic recovery extends to most letters. Many general approaches have been explored for this problem including linguistically motivated rule-based approaches, machine learning approaches, such as Hidden Markov Models (HMM) BIBREF14 and Conditional Random Fields (CRF) BIBREF16, and lately deep learning approaches such as Arabic BIBREF17, BIBREF18, BIBREF19, Slovak BIBREF20, and Yorba BIBREF12. Aside from rule-based approaches BIBREF21, different methods were used to recover diacritics in Arabic text. Using a hidden Markov model (HMM) BIBREF14, BIBREF22 with an input character sequence, the model attempts to find the best state sequence given previous observations. BIBREF14 reported a 14% word error rate (WER) while BIBREF22 achieved a 4.1% diacritic error rate (DER) on the Quran (CA). BIBREF23 combined both morphological, acoustic, and contextual features to build a diacritizer trained on FBIS and LDC CallHome ECA collections. They reported a 9% (DER) without CE, and 28% DER with CE. BIBREF24 employed a cascade of a finite state transducers. The cascade stacked a word language model (LM), a charachter LM, and a morphological model. The model achieved an accuracy of 7.33% WER without CE and and 23.61% WER with CE. BIBREF25 employed a maximum entropy model for sequence classification. The system was trained on the LDC’s Arabic Treebank (ATB) and evaluated on a 600 articles from An-Nahar Newspaper (340K words) and achieved 5.5% DER and 18% WER on words without CE.",
"BIBREF26 used a hybrid approach that utilizes the output of Alkhalil morphological Analyzer BIBREF27 to generate all possible out of context diacritizations of a word. Then, an HMM guesses the correct diacritized form. Similarly, Microsoft Arabic Toolkit Services (ATKS) diacritizer BIBREF28 uses a rule-based morphological analyzer that produces possible analyses and an HMM in conjunction with rules to guess the most likely analysis. They report WER of 11.4% and 4.4% with and without CE. MADAMIRA BIBREF29 uses a combinations of morpho-syntactic features to rank a list of potential analyses provided by the Buckwalter Arabic Morphological Analyzer (BAMA) BIBREF30. An SVM trained on ATB selects the most probable analysis, including the diacritized form. MADAMIRA achieves 19.0% and 6.7% WER with and without CE respectively BIBREF31. Farasa BIBREF31 uses an HMM to guess CW diacritics and an SVM-rank based model trained on morphological and syntactic features to guess CEs. Farasa achieves WER of 12.8% and 3.3% with and without CEs.",
"More recent work employed different neural architectures to model the diacritization problem. BIBREF17 used a biLSTM recurrent neural network trained on the same dataset as BIBREF25. They explored one, two and three BiLSTM layers with 250 nodes in each layers, achieving WER of 9.1% including CE on ATB. Similar architectures were used but achieved lower results BIBREF7, BIBREF32. BIBREF33 provide a comprehensive survey on Arabic diacritization. A more recent survey by BIBREF34 concluded that reported results are often incomparable due to the usage of different test sets. They concluded that a large unigram LM for CW diacritic recovery is competitive with many of the systems in the literature, which prompted us to utilize a unigram language model for post correction. As mentioned earlier, two conclusions can be drawn, namely: restoring CEs is more challenging than CW diacritic restoration; and combining multiple features typically improves CE restoration.",
"In this paper, we expand upon the work in the literature by introducing feature-rich DNN models for restoring both CW and CE diacritics. We compare our models to multiple systems on the same test set. We achieve results that reduce diacritization error rates by more than half compared to the best SOTA systems. We further conduct an ablation study to determine the relative effect of the different features. As for Arabic, it is a Semitic language with derivational morphology. Arabic nouns, adjectives, adverbs, and verbs are typically derived from a closed set of 10,000 roots of length 3, 4, or rarely 5. Arabic nouns and verbs are derived from roots by applying templates to the roots to generate stems. Such templates may carry information that indicate morphological features of words such POS tag, gender, and number. For example, given a 3-letter root with 3 consonants CCC, a valid template may be CwACC , where the infix “wA” (وا>) is inserted, this template typically indicates an Arabic broken, or irregular, plural template for a noun of template CACC or CACCp if masculine or feminine respectively. Further, stems may accept prefixes and/or suffixes to form words. Prefixes include coordinating conjunctions, determiner, and prepositions, and suffixes include attached pronouns and gender and number markers."
],
[
"For MSA, we acquired the diacritized corpus that was used to train the RDI BIBREF7 diacritizer and the Farasa diacritizer BIBREF31. The corpus contains 9.7M tokens with approximately 194K unique surface forms (excluding numbers and punctuation marks). The corpus covers multiple genres such as politics and sports and is a mix of MSA and CA. This corpus is considerably larger than the Arabic Treebank BIBREF35 and is more consistent in its diacritization. For testing, we used the freely available WikiNews test set BIBREF31, which is composed of 70 MSA WikiNews articles (18,300 tokens) and evenly covers a variety of genres including politics, economics, health, science and technology, sports, arts and culture.",
"For CA, we obtained a large collection of fully diacritized classical texts (2.7M tokens) from a book publisher, and we held-out a small subset of 5,000 sentences (approximately 400k words) for testing. Then, we used the remaining sentences to train the CA models."
],
[
"Arabic words are typically derived from a limited set of roots by fitting them into so-called stem-templates (producing stems) and may accept a variety of prefixes and suffixes such as prepositions, determiners, and pronouns (producing words). Word stems specify the lexical selection and are typically unaffected by the attached affixes. We used 4 feature types, namely:",
"CHAR: the characters.",
"SEG: the position of the character in a word segment. For example, given the word “wAlktAb” (والكتاب> and the book/writers), which is composed of 3 segments “w+Al+ktAb” (و+ال+كتاب>). Letters were marked as “B” if they begin a segment, “M” if they are in the middle of a segment, “E” if they end a segment, and “S” if they are single letter segments. So for “w+Al+ktAb”, the corresponding character positions are “S+BE+BMME”. We used Farasa to perform segmentation, which has a reported segmentation accuracy of 99% on the WikiNews dataset BIBREF36.",
"PRIOR: diacritics seen in the training set per segment. Since we used a character level model, this feature informed the model with word level information. For example, the word “ktAb” (كتاب>) was observed to have two diacritized forms in the training set, namely “kitaAb” (كِتَاب> – book) and “kut$\\sim $aAb” (كُتَّاب> – writers). The first letter in the word (“k”) accepted the diacritics “i” and “u”. Thus given a binary vector representing whether a character is allowed to assume any of the eight primitive Arabic diacritic marks (a, i, u, o, K, N, F, and $\\sim $ in order), the first letter would be given the following vector “01100000”. If a word segment was never observed during training, the vector for all letters therein would be set to 11111111. This feature borrows information from HMM models, which have been fairly successful in diacritizing word cores.",
"CASE: whether the letter expects a core word diacritic or a case ending. Case endings are placed on only one letter in a word, which may or may not be the last letter in the word. This is a binary feature."
],
[
"Using a DNN model, particularly with a biLSTM BIBREF37, is advantageous because the model automatically explores the space of feature combinations and is able to capture distant dependencies. A number of studies have explored various biLSTM architectures BIBREF17, BIBREF7, BIBREF32 including stacked biLSTMs confirming their effectiveness. As shown in Figure FIGREF14, we employed a character-based biLSTM model with associated features for each character. Every input character had an associated list of $m$ features, and we trained randomly initialized embeddings of size 50 for each feature. Then, we concatenated the feature embeddings vectors creating an $m\\times 50$ vector for each character, which was fed into the biLSTM layer of length 100. The output of the biLSTM layer was fed directly into a dense layer of size 100. We used early stopping with patience of 5 epochs, a learning rate of 0.001, a batch size of 256, and an Adamax optimizer. The input was the character sequence in a sentence with words being separated by word boundary markers (WB), and we set the maximum sentence length to 1,250 characters."
],
[
"Table TABREF17 lists the features that we used for CE recovery. We used Farasa to perform segmentation and POS tagging and to determine stem-templates BIBREF31. Farasa has a reported POS accuracy of 96% on the WikiNews dataset BIBREF31. Though the Farasa diacritizer utilizes a combination of some the features presented herein, namely segmentation, POS tagging, and stem templates, Farasa's SVM-ranking approach requires explicit specification of feature combinations (ex. $Prob(CE\\Vert current\\_word, prev\\_word, prev\\_CE)$). Manual exploration of the feature space is undesirable, and ideally we would want our learning algorithm to do so automatically. The flexibility of the DNN model allowed us to include many more surface level features such as affixes, leading and trailing characters in words and stems, and the presence of words in large gazetteers of named entities. As we show later, these additional features significantly lowered CEER."
],
[
"Figure FIGREF19 shows the architecture of our DNN algorithm. Every input word had an associated list of $n$ features, and we trained randomly initialized embeddings of size 100 for each feature. Then, we concatenated the feature embeddings vectors creating an $n\\times 100$ vector for each word. We fed these vectors into a biLSTM layer of 100 dimensions after applying a dropout of 75%, where dropout behaves like a regularlizer to avoid overfitting BIBREF40. We conducted side experiments with lower dropout rates, but the higher dropout rate worked best. The output of the biLSTM layer was fed into a 100 dimensional dense layer with 15% dropout and softmax activation. We conducted side experiments where we added additional biLSTM layers and replaced softmax with a conditional random field layer, but we did not observe improvements. Thus, we opted for a simpler model. We used a validation set to determine optimal parameters such as dropout rate. Again, we used the “Adamax” optimizer with categorical cross entropy loss and a learning rate of 0.001. We also applied early stopping with patience of up to 5 consecutive epochs without improvement."
],
[
"For all the experiments conducted herein, we used the Keras toolkit BIBREF41 with a TensorFlow backend BIBREF42. We used the entirety of the training set as input, and we instructed Keras to use 5% of the data for tuning (validation). We included the CASE feature, which specifies whether the letter accepts a normal diacritic or case ending, in all our setups. We conducted multiple experiment using different features, namely:",
"CHAR: This is our baseline setup where we only used the characters as features.",
"CHAR+SEG: This takes the characters and their segmentation information as features.",
"CHAR+PRIOR: This takes the characters and their the observed diacritized forms in the training set.",
"All: This setup includes all the features.",
"We also optionally employed post correction. For words that were seen in training, if the model produced a diacritized form that was not seen in the training data, we assumed it was an error and replaced it with the most frequently observed diacritized form (using a unigram language model). We report two error rates, namely WER (at word level) and DER (at character level). We used relaxed scoring where we assumed an empty case to be equivalent to sukun, and we removed default diacritics – fatHa followed by alef, kasra followed by ya, and damma followed by wa. Using such scoring would allow to compare to other systems in the literature that may use different diacritization conventions."
],
[
"For testing, we used the aforementioned WikiNews dataset to test the MSA diacritizer and the held-out 5,000 sentences for CA. Table TABREF27 shows WER and DER results using different features with and without post correction."
],
[
"For MSA, though the CHAR+PRIOR feature led to worse results than using CHAR alone, the results show that combining all the features achieved the best results. Moreover, post correction improved results overall. We compare our results to five other systems, namely Farasa BIBREF31, MADAMIRA BIBREF29, RDI (Rashwan et al., 2015), MIT (Belinkow and Glass, 2015), and Microsoft ATKS BIBREF28. Table TABREF34 compares our system with others in the aforementioned systems. As the results show, our results beat the current state-of-the-art.",
"For error analysis, we analyzed all the errors (527 errors). The errors types along with examples of each are shown in Table TABREF29. The most prominent error type arises from the selection of a valid diacritized form that does not match the context (40.8%). Perhaps, including POS tags as a feature or augmenting the PRIOR feature with POS tag information and a bigram language model may reduce the error rate further. The second most common error is due to transliterated foreign words including foreign named entities (23.5%). Such words were not observed during training. Further, Arabic Named entities account for 10.6% of the errors, where they were either not seen in training or they share identical non-diacritized forms with other words. Perhaps, building larger gazetteers of diacritized named entities may resolve NE related errors. In 10.8% of the cases, the diacritizer produced in completely incorrect diacritized forms. In some the cases (9.1%), though the diacritizer produced a form that is different from the reference, both forms were in fact correct. Most of these cases were due to variations in diacritization conventions (ex. “bare alef” (A) at start of a word receiving a diacritic or not). Other cases include foreign words and some words where both diacritized forms are equally valid."
],
[
"For CA results, the CHAR+SEG and CHAR+PRIOR performed better than using characters alone with CHAR+PRIOR performing better than CHAR+SEG. As in the case with MSA, combining all the features led to the best results. Post correction had a significantly larger positive impact on results compared to what we observed for MSA. This may indicate that we need a larger training set. The best WER that we achieved for CW diacritics with post corrections is 2.2%. Since we did not have access to any publicly available system that is tuned for CA, we compared our best system to using our best MSA system to diacritize the CA test set, and the MSA diacritizer produced significantly lower results with a WER of 8.5% (see Table TABREF34). This highlights the large difference between MSA and CA and the need for systems that are specifically tuned for both.",
"We randomly selected and analyzed 500 errors (5.2% of the errors). The errors types along with examples of each are shown in Table TABREF33. The two most common errors involve the system producing completely correct diacritized forms (38.8%) or correct forms that don't match the context (31.4%). The relatively higher percentage of completely incorrect guesses, compared to MSA, may point to the higher lexical diversity of classical Arabic. As for MSA, we suspect that adding additional POS information and employing a word bigram to constrain the PRIOR feature may help reduce selection errors. Another prominent error is related to the diacritics that appear on attached suffixes, particularly pronouns, which depend on the choice of case ending (13.2%). Errors due to named entities are slightly fewer than those seen for MSA (8.8%). A noticeable number of mismatches between the guess and the reference are due to partial diacritization of the reference (4.4%). We plan to conduct an extra round of checks on the test set."
],
[
"We conducted multiple experiments to determine the relative effect of the different features as follows:",
"word: This is our baseline setup, which uses word surface forms only.",
"word-surface: This setup uses the word surface forms, stems, prefixes, and suffixes (including noun suffixes). This simulates the case when no POS tagging information is available.",
"word-POS: This includes the word surface form and POS information, including gender and number of stems, prefixes, and suffixes.",
"word-morph: This includes words and their stem templates to capture morphological patterns.",
"word-surface-POS-morph: This setup uses all the features (surface, POS, and morphological).",
"all-misc: This uses all features plus word and stem leading and trailing character unigrams and bigrams in addition to sukun words and named entities.",
"For testing MSA, we used the aforementioned WikiNews dataset. Again, we compared our results to five other systems, namely Farasa BIBREF31, MADAMIRA BIBREF29, RDI (Rashwan et al., 2015), MIT (Belinkow and Glass, 2015), and Microsoft ATKS BIBREF28. For CA testing, we used the 5,000 sentences that we set aside. Again, we compared to our best MSA system."
],
[
"Table TABREF45 lists the results of our setups compared to other systems."
],
[
"As the results show, our baseline DNN system outperforms all state-of-the-art systems. Further, adding more features yielded better results overall. Surface-level features resulted in the most gain, followed by POS tags, and lastly stem templates. Further, adding head and tail characters along with a list of sukun words and named entities led to further improvement. Our proposed feature-rich system has a CEER that is approximately 61% lower than any of the state-of-the-art systems.",
"Figure FIGREF49 shows CE distribution and prediction accuracy. For the four basic markers kasra, fatHa, damma and sukun, which appear 27%, 14%, 9% and 10% respectively, the system has CEER of $\\sim $1% for each. Detecting the virtual CE mark is a fairly easy task. All other CE markers represent 13% with almost negligible errors. Table TABREF30 lists a thorough breakdown of all errors accounting for at 1% of the errors along with the most common reasons of the errors and examples illustrating these reasons. For example, the most common error type involves guessing a fatHa (a) instead of damma (u) or vice versa (19.3%). The most common reasons for this error type, based on inspecting the errors, were due to: POS errors (ex. a word is tagged as a verb instead of a noun); and a noun is treated as a subject instead of an object or vice versa. The table details the rest of the error types. Overall, some of the errors are potentially fixable using better POS tagging, improved detection of non-Arabized foreign names, and detection of indeclinability. However, some errors are more difficult and require greater understanding of semantics such as improper attachment, incorrect idafa, and confusion between subject and object. Perhaps, such semantic errors can be resolved using parsing."
],
[
"The results show that the POS tagging features led to the most improvements followed by the surface features. Combining all features led to the best results with WER of 2.5%. As we saw for CW diacritics, using our best MSA system to diacritize CA led to significantly lower results with CEER of 8.9%.",
"Figure FIGREF50 shows CE distribution and prediction accuracy. For the four basic markers fatHa, kasra, damma and sukun, which appear 18%, 14%, 13% and 8% respectively, the system has CEER $\\sim $0.5% for each. Again, detecting the virtual CE mark was a fairly easy task. All other CE markers representing 20% have negligible errors.",
"Table TABREF31 lists all the error types, which account for at least 1% of the errors, along with their most common causes and explanatory examples. The error types are similar to those observed for MSA. Some errors are more syntactic and morphological in nature and can be addressed using better POS tagging and identification of indeclinability, particularly as they relate to named entities and nouns with feminine markers. Other errors such as incorrect attachment, incorrect idafa, false subject, and confusion between subject and object can perhaps benefit from the use of parsing. As with the core-word errors for CA, the reference has some errors (ex. {a,i,o} $\\Rightarrow $ #), and extra rounds of reviews of the reference are in order."
],
[
"Table TABREF48 compares the full word diacritization (CW+CE) of our best setup to other systems in the literature. As the results show for MSA, our overall diacritization WER is 6.0% while the state of the art system has a WER of 12.2%. As for CA, our best system produced an error rate of 4.3%, which is significantly better than using our best MSA system to diacritize CA."
],
[
"In this paper, we presented a feature-rich DNN approach for MSA CW and CE recovery that produces a word level error for MSA of 6.0%, which is more than 50% lower than state-of-the-art systems (6.0% compared to 12.2%) and word error rate of 4.3% for CA. Specifically, we used biLSTM-based model with a variety of surface, morphological, and syntactic features. Reliable NLP tools may be required to generate some of these features, and such tools may not be readily available for other language varieties, such as dialectal Arabic. However, we showed the efficacy of different varieties of features, such as surface level-features, and they can help improve diacritization individually. Further, though some errors may be overcome using improved NLP tools (ex. better POS tagging), semantic errors, such incorrect attchment, are more difficult to fix. Perhaps, using dependency parsing may help overcome some semantic errors. As for feature engineering, the broad categories of features, such as surface, syntactic, and morphological features, may likely carry-over to other languages, language-specific feature engineering may be require to handle the specificity of each language. Lastly, since multiple diacritization conventions may exist, as in the case of Arabic, adopting one convention consistently is important for training a good system and for properly testing it. Though we have mostly achieved this for MSA, the CA dataset requires more checks to insure greater consistency.",
"For future work, we want to explore the effectiveness of augmenting our CW model with POS tagging information and a bigram language model. Further, we plan to create a multi reference diacritization test set to handle words that have multiple valid diacritized forms. For CE, we want to examine the effectiveness of the proposed features for Arabic parsing. We plan to explore: character-level convolutional neural networks that may capture sub-word morphological features; pre-trained embeddings; and attention mechanisms to focus on salient features. We also plan to explore joint modeling for both core word and case ending diacritics."
]
],
"section_name": [
"Introduction",
"Background",
"Our Diacritizer ::: Training and Test Corpora",
"Our Diacritizer ::: Core Word Diacritization ::: Features.",
"Our Diacritizer ::: Core Word Diacritization ::: DNN Model.",
"Our Diacritizer ::: Case Ending Diacritization ::: Features.",
"Our Diacritizer ::: Case Ending Diacritization ::: DNN Model",
"Experiments and Results ::: Core Word ::: Experimental Setup",
"Experiments and Results ::: Core Word ::: Results and Error analysis",
"Experiments and Results ::: Core Word ::: Results and Error analysis ::: MSA Results:",
"Experiments and Results ::: Core Word ::: Results and Error analysis ::: CA Results:",
"Experiments and Results ::: Case Ending ::: Experimental Setup",
"Experiments and Results ::: Case Ending ::: Results and Error Analysis",
"Experiments and Results ::: Case Ending ::: Results and Error Analysis ::: MSA Results:",
"Experiments and Results ::: Case Ending ::: Results and Error Analysis ::: CA Results:",
"Experiments and Results ::: Full Diacritization Results",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"9335a893cabbc1a7d7affacbb6ac7872fadfcb2f",
"dd2ec75a66e146fd8a479474a2d5bcd508c8c078"
],
"answer": [
{
"evidence": [
"For MSA, we acquired the diacritized corpus that was used to train the RDI BIBREF7 diacritizer and the Farasa diacritizer BIBREF31. The corpus contains 9.7M tokens with approximately 194K unique surface forms (excluding numbers and punctuation marks). The corpus covers multiple genres such as politics and sports and is a mix of MSA and CA. This corpus is considerably larger than the Arabic Treebank BIBREF35 and is more consistent in its diacritization. For testing, we used the freely available WikiNews test set BIBREF31, which is composed of 70 MSA WikiNews articles (18,300 tokens) and evenly covers a variety of genres including politics, economics, health, science and technology, sports, arts and culture.",
"For CA, we obtained a large collection of fully diacritized classical texts (2.7M tokens) from a book publisher, and we held-out a small subset of 5,000 sentences (approximately 400k words) for testing. Then, we used the remaining sentences to train the CA models."
],
"extractive_spans": [
"diacritized corpus that was used to train the RDI BIBREF7 diacritizer and the Farasa diacritizer BIBREF31",
"WikiNews test set BIBREF31",
" large collection of fully diacritized classical texts (2.7M tokens) from a book publisher"
],
"free_form_answer": "",
"highlighted_evidence": [
"For MSA, we acquired the diacritized corpus that was used to train the RDI BIBREF7 diacritizer and the Farasa diacritizer BIBREF31. The corpus contains 9.7M tokens with approximately 194K unique surface forms (excluding numbers and punctuation marks).",
"For testing, we used the freely available WikiNews test set BIBREF31, which is composed of 70 MSA WikiNews articles (18,300 tokens) and evenly covers a variety of genres including politics, economics, health, science and technology, sports, arts and culture.",
"For CA, we obtained a large collection of fully diacritized classical texts (2.7M tokens) from a book publisher, and we held-out a small subset of 5,000 sentences (approximately 400k words) for testing."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Modern Standard Arabic (MSA) and Classical Arabic (CA) have two types of vowels, namely long vowels, which are explicitly written, and short vowels, aka diacritics, which are typically omitted in writing but are reintroduced by readers to properly pronounce words. Since diacritics disambiguate the sense of the words in context and their syntactic roles in sentences, automatic diacritic recovery is essential for applications such as text-to-speech and educational tools for language learners, who may not know how to properly verbalize words. Diacritics have two types, namely: core-word (CW) diacritics, which are internal to words and specify lexical selection; and case-endings (CE), which appear on the last letter of word stems, typically specifying their syntactic role. For example, the word “ktb” (كتب>) can have multiple diacritized forms such as “katab” (كَتَب> – meaning “he wrote”) “kutub” (كُتُب> – “books”). While “katab” can only assume one CE, namely “fatHa” (“a”), “kutub” can accept the CEs: “damma” (“u”) (nominal – ex. subject), “a” (accusative – ex. object), “kasra” (“i”) (genitive – ex. PP predicate), or their nunations. There are 14 diacritic combinations. When used as CEs, they typically convey specific syntactic information, namely: fatHa “a” for accusative nouns, past verbs and subjunctive present verbs; kasra “i” for genitive nouns; damma “u” for nominative nouns and indicative present verbs; sukun “o” for jussive present verbs and imperative verbs. FatHa, kasra and damma can be preceded by shadda “$\\sim $” for gemination (consonant doubling) and/or converted to nunation forms following some grammar rules. In addition, according to Arabic orthography and phonology, some words take a virtual (null) “#” marker when they end with certain characters (ex: long vowels). This applies also to all non-Arabic words (ex: punctuation, digits, Latin words, etc.). Generally, function words, adverbs and foreign named entities (NEs) have set CEs (sukun, fatHa or virtual). Similar to other Semitic languages, Arabic allows flexible Verb-Subject-Object as well as Verb-Object-Subject constructs BIBREF1. Such flexibility creates inherent ambiguity, which is resolved by diacritics as in “r$>$Y Emr Ely” (رأى عمر علي> Omar saw Ali/Ali saw Omar). In the absence of diacritics it is not clear who saw whom. Similarly, in the sub-sentence “kAn Alm&tmr AltAsE” (كان المؤتمر التاسع>), if the last word, is a predicate of the verb “kAn”, then the sentence would mean “this conference was the ninth” and would receive a fatHa (a) as a case ending. Conversely, if it was an adjective to the “conference”, then the sentence would mean “the ninth conference was ...” and would receive a damma (u) as a case ending. Thus, a consideration of context is required for proper disambiguation. Due to the inter-word dependence of CEs, they are typically harder to predict compared to core-word diacritics BIBREF2, BIBREF3, BIBREF4, BIBREF5, with CEER of state-of-the-art systems being in double digits compared to nearly 3% for word-cores. Since recovering CEs is akin to shallow parsing BIBREF6 and requires morphological and syntactic processing, it is a difficult problem in Arabic NLP. In this paper, we focus on recovering both CW diacritics and CEs. We employ two separate Deep Neural Network (DNN) architectures for recovering both kinds of diacritic types. We use character-level and word-level bidirectional Long-Short Term Memory (biLSTM) based recurrent neural models for CW diacritic and CE recovery respectively. 
We train models for both Modern Standard Arabic (MSA) and Classical Arabic (CA). For CW diacritics, the model is informed using word segmentation information and a unigram language model. We also employ a unigram language model to perform post correction on the model output. We achieve word error rates for CW diacritics of 2.9% and 2.2% for MSA and CA. The MSA word error rate is 6% lower than the best results in the literature (the RDI diacritizer BIBREF7). The CE model is trained with a rich set of surface, morphological, and syntactic features. The proposed features would aid the biLSTM model in capturing syntactic dependencies indicated by Part-Of-Speech (POS) tags, gender and number features, morphological patterns, and affixes. We show that our model achieves a case ending error rate (CEER) of 3.7% for MSA and 2.5% for CA. For MSA, this CEER is more than 60% lower than other state-of-the-art systems such as Farasa and the RDI diacritizer, which are trained on the same dataset and achieve CEERs of 10.7% and 14.4% respectively. The contributions of this paper are as follows:",
"For MSA, we acquired the diacritized corpus that was used to train the RDI BIBREF7 diacritizer and the Farasa diacritizer BIBREF31. The corpus contains 9.7M tokens with approximately 194K unique surface forms (excluding numbers and punctuation marks). The corpus covers multiple genres such as politics and sports and is a mix of MSA and CA. This corpus is considerably larger than the Arabic Treebank BIBREF35 and is more consistent in its diacritization. For testing, we used the freely available WikiNews test set BIBREF31, which is composed of 70 MSA WikiNews articles (18,300 tokens) and evenly covers a variety of genres including politics, economics, health, science and technology, sports, arts and culture.",
"For CA, we obtained a large collection of fully diacritized classical texts (2.7M tokens) from a book publisher, and we held-out a small subset of 5,000 sentences (approximately 400k words) for testing. Then, we used the remaining sentences to train the CA models."
],
"extractive_spans": [
"the diacritized corpus that was used to train the RDI BIBREF7 diacritizer ",
"WikiNews ",
"a large collection of fully diacritized classical texts"
],
"free_form_answer": "",
"highlighted_evidence": [
"Modern Standard Arabic (MSA) and Classical Arabic (CA) have two types of vowels, namely long vowels, which are explicitly written, and short vowels, aka diacritics, which are typically omitted in writing but are reintroduced by readers to properly pronounce words. ",
"For MSA, we acquired the diacritized corpus that was used to train the RDI BIBREF7 diacritizer and the Farasa diacritizer BIBREF31. ",
"For testing, we used the freely available WikiNews test set BIBREF31, which is composed of 70 MSA WikiNews articles (18,300 tokens) and evenly covers a variety of genres including politics, economics, health, science and technology, sports, arts and culture.",
"For CA, we obtained a large collection of fully diacritized classical texts (2.7M tokens) from a book publisher, and we held-out a small subset of 5,000 sentences (approximately 400k words) for testing. Then, we used the remaining sentences to train the CA models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"016255680df53e86881a4ab613f61317fafa23e4",
"5f968409e50cc4177a19fbf299c07e61faf524bb"
],
"answer": [
{
"evidence": [
"Modern Standard Arabic (MSA) and Classical Arabic (CA) have two types of vowels, namely long vowels, which are explicitly written, and short vowels, aka diacritics, which are typically omitted in writing but are reintroduced by readers to properly pronounce words. Since diacritics disambiguate the sense of the words in context and their syntactic roles in sentences, automatic diacritic recovery is essential for applications such as text-to-speech and educational tools for language learners, who may not know how to properly verbalize words. Diacritics have two types, namely: core-word (CW) diacritics, which are internal to words and specify lexical selection; and case-endings (CE), which appear on the last letter of word stems, typically specifying their syntactic role. For example, the word “ktb” (كتب>) can have multiple diacritized forms such as “katab” (كَتَب> – meaning “he wrote”) “kutub” (كُتُب> – “books”). While “katab” can only assume one CE, namely “fatHa” (“a”), “kutub” can accept the CEs: “damma” (“u”) (nominal – ex. subject), “a” (accusative – ex. object), “kasra” (“i”) (genitive – ex. PP predicate), or their nunations. There are 14 diacritic combinations. When used as CEs, they typically convey specific syntactic information, namely: fatHa “a” for accusative nouns, past verbs and subjunctive present verbs; kasra “i” for genitive nouns; damma “u” for nominative nouns and indicative present verbs; sukun “o” for jussive present verbs and imperative verbs. FatHa, kasra and damma can be preceded by shadda “$\\sim $” for gemination (consonant doubling) and/or converted to nunation forms following some grammar rules. In addition, according to Arabic orthography and phonology, some words take a virtual (null) “#” marker when they end with certain characters (ex: long vowels). This applies also to all non-Arabic words (ex: punctuation, digits, Latin words, etc.). Generally, function words, adverbs and foreign named entities (NEs) have set CEs (sukun, fatHa or virtual). Similar to other Semitic languages, Arabic allows flexible Verb-Subject-Object as well as Verb-Object-Subject constructs BIBREF1. Such flexibility creates inherent ambiguity, which is resolved by diacritics as in “r$>$Y Emr Ely” (رأى عمر علي> Omar saw Ali/Ali saw Omar). In the absence of diacritics it is not clear who saw whom. Similarly, in the sub-sentence “kAn Alm&tmr AltAsE” (كان المؤتمر التاسع>), if the last word, is a predicate of the verb “kAn”, then the sentence would mean “this conference was the ninth” and would receive a fatHa (a) as a case ending. Conversely, if it was an adjective to the “conference”, then the sentence would mean “the ninth conference was ...” and would receive a damma (u) as a case ending. Thus, a consideration of context is required for proper disambiguation. Due to the inter-word dependence of CEs, they are typically harder to predict compared to core-word diacritics BIBREF2, BIBREF3, BIBREF4, BIBREF5, with CEER of state-of-the-art systems being in double digits compared to nearly 3% for word-cores. Since recovering CEs is akin to shallow parsing BIBREF6 and requires morphological and syntactic processing, it is a difficult problem in Arabic NLP. In this paper, we focus on recovering both CW diacritics and CEs. We employ two separate Deep Neural Network (DNN) architectures for recovering both kinds of diacritic types. We use character-level and word-level bidirectional Long-Short Term Memory (biLSTM) based recurrent neural models for CW diacritic and CE recovery respectively. 
We train models for both Modern Standard Arabic (MSA) and Classical Arabic (CA). For CW diacritics, the model is informed using word segmentation information and a unigram language model. We also employ a unigram language model to perform post correction on the model output. We achieve word error rates for CW diacritics of 2.9% and 2.2% for MSA and CA. The MSA word error rate is 6% lower than the best results in the literature (the RDI diacritizer BIBREF7). The CE model is trained with a rich set of surface, morphological, and syntactic features. The proposed features would aid the biLSTM model in capturing syntactic dependencies indicated by Part-Of-Speech (POS) tags, gender and number features, morphological patterns, and affixes. We show that our model achieves a case ending error rate (CEER) of 3.7% for MSA and 2.5% for CA. For MSA, this CEER is more than 60% lower than other state-of-the-art systems such as Farasa and the RDI diacritizer, which are trained on the same dataset and achieve CEERs of 10.7% and 14.4% respectively. The contributions of this paper are as follows:"
],
"extractive_spans": [
"Farasa",
"RDI"
],
"free_form_answer": "",
"highlighted_evidence": [
"Modern Standard Arabic (MSA) and Classical Arabic (CA) have two types of vowels, namely long vowels, which are explicitly written, and short vowels, aka diacritics, which are typically omitted in writing but are reintroduced by readers to properly pronounce words.",
"We show that our model achieves a case ending error rate (CEER) of 3.7% for MSA and 2.5% for CA. For MSA, this CEER is more than 60% lower than other state-of-the-art systems such as Farasa and the RDI diacritizer, which are trained on the same dataset and achieve CEERs of 10.7% and 14.4% respectively. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For MSA, though the CHAR+PRIOR feature led to worse results than using CHAR alone, the results show that combining all the features achieved the best results. Moreover, post correction improved results overall. We compare our results to five other systems, namely Farasa BIBREF31, MADAMIRA BIBREF29, RDI (Rashwan et al., 2015), MIT (Belinkow and Glass, 2015), and Microsoft ATKS BIBREF28. Table TABREF34 compares our system with others in the aforementioned systems. As the results show, our results beat the current state-of-the-art."
],
"extractive_spans": [
"Farasa BIBREF31",
"MADAMIRA BIBREF29",
"RDI (Rashwan et al., 2015)",
"MIT (Belinkow and Glass, 2015)",
"Microsoft ATKS BIBREF28"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare our results to five other systems, namely Farasa BIBREF31, MADAMIRA BIBREF29, RDI (Rashwan et al., 2015), MIT (Belinkow and Glass, 2015), and Microsoft ATKS BIBREF28."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3a3aee7c4712f03df38814f39fdf558de74996e6"
],
"answer": [
{
"evidence": [
"Table TABREF17 lists the features that we used for CE recovery. We used Farasa to perform segmentation and POS tagging and to determine stem-templates BIBREF31. Farasa has a reported POS accuracy of 96% on the WikiNews dataset BIBREF31. Though the Farasa diacritizer utilizes a combination of some the features presented herein, namely segmentation, POS tagging, and stem templates, Farasa's SVM-ranking approach requires explicit specification of feature combinations (ex. $Prob(CE\\Vert current\\_word, prev\\_word, prev\\_CE)$). Manual exploration of the feature space is undesirable, and ideally we would want our learning algorithm to do so automatically. The flexibility of the DNN model allowed us to include many more surface level features such as affixes, leading and trailing characters in words and stems, and the presence of words in large gazetteers of named entities. As we show later, these additional features significantly lowered CEER."
],
"extractive_spans": [
"affixes, leading and trailing characters in words and stems, and the presence of words in large gazetteers of named entities"
],
"free_form_answer": "",
"highlighted_evidence": [
" The flexibility of the DNN model allowed us to include many more surface level features such as affixes, leading and trailing characters in words and stems, and the presence of words in large gazetteers of named entities. As we show later, these additional features significantly lowered CEER."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ccc415ca4420585c989f8f3d35466de2f4e864ef"
],
"answer": [
{
"evidence": [
"Table TABREF17 lists the features that we used for CE recovery. We used Farasa to perform segmentation and POS tagging and to determine stem-templates BIBREF31. Farasa has a reported POS accuracy of 96% on the WikiNews dataset BIBREF31. Though the Farasa diacritizer utilizes a combination of some the features presented herein, namely segmentation, POS tagging, and stem templates, Farasa's SVM-ranking approach requires explicit specification of feature combinations (ex. $Prob(CE\\Vert current\\_word, prev\\_word, prev\\_CE)$). Manual exploration of the feature space is undesirable, and ideally we would want our learning algorithm to do so automatically. The flexibility of the DNN model allowed us to include many more surface level features such as affixes, leading and trailing characters in words and stems, and the presence of words in large gazetteers of named entities. As we show later, these additional features significantly lowered CEER.",
"FLOAT SELECTED: Table 1. Features with examples and motivation."
],
"extractive_spans": [],
"free_form_answer": "POS, gender/number and stem POS",
"highlighted_evidence": [
"Table TABREF17 lists the features that we used for CE recovery.",
"FLOAT SELECTED: Table 1. Features with examples and motivation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"what datasets were used?",
"what are the previous state of the art?",
"what surface-level features are used?",
"what linguistics features are used?"
],
"question_id": [
"2740e3d7d33173664c1c5ab292c7ec75ff6e0802",
"db72a78a7102b5f0e75a4d9e1a06a3c2e7aabb21",
"48bd71477d5f89333fa7ce5c4556e4d950fb16ed",
"76ed74788e3eb3321e646c48ae8bf6cdfe46dca1"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Fig. 1. DNN model for core word diacritics",
"Table 1. Features with examples and motivation.",
"Fig. 2. DNN case ending model architecture",
"Table 2. Core word diacritization results",
"Table 3. Error analysis: Core word error types for MSA",
"Fig. 3. Case endings distribution and prediction accuracy for MSA",
"Fig. 4. Case endings distribution and prediction accuracy for CA",
"Table 4. MSA case errors accounting from more than 1% of errors",
"Table 5. CA case errors accounting from more than 1% of errors",
"Table 6. Error analysis: Core word error types for CA",
"Table 7. Comparing our system to state-of-the-art systems – Core word diacritics",
"Table 8. MSA Results and comparison to other systems",
"Table 9. Comparison to other systems for full diacritization"
],
"file": [
"7-Figure1-1.png",
"8-Table1-1.png",
"9-Figure2-1.png",
"10-Table2-1.png",
"11-Table3-1.png",
"14-Figure3-1.png",
"15-Figure4-1.png",
"19-Table4-1.png",
"20-Table5-1.png",
"21-Table6-1.png",
"22-Table7-1.png",
"23-Table8-1.png",
"24-Table9-1.png"
]
} | [
"what linguistics features are used?"
] | [
[
"2002.01207-8-Table1-1.png",
"2002.01207-Our Diacritizer ::: Case Ending Diacritization ::: Features.-0"
]
] | [
"POS, gender/number and stem POS"
] | 216 |
1803.05223 | MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge | We introduce a large dataset of narrative texts and questions about these texts, intended to be used in a machine comprehension task that requires reasoning using commonsense knowledge. Our dataset complements similar datasets in that we focus on stories about everyday activities, such as going to the movies or working in the garden, and that the questions require commonsense knowledge, or more specifically, script knowledge, to be answered. We show that our mode of data collection via crowdsourcing results in a substantial amount of such inference questions. The dataset forms the basis of a shared task on commonsense and script knowledge organized at SemEval 2018 and provides challenging test cases for the broader natural language understanding community. | {
"paragraphs": [
[
"Ambiguity and implicitness are inherent properties of natural language that cause challenges for computational models of language understanding. In everyday communication, people assume a shared common ground which forms a basis for efficiently resolving ambiguities and for inferring implicit information. Thus, recoverable information is often left unmentioned or underspecified. Such information may include encyclopedic and commonsense knowledge. This work focuses on commonsense knowledge about everyday activities, so-called scripts.",
"This paper introduces a dataset to evaluate natural language understanding approaches with a focus on interpretation processes requiring inference based on commonsense knowledge. In particular, we present MCScript, a dataset for assessing the contribution of script knowledge to machine comprehension. Scripts are sequences of events describing stereotypical human activities (also called scenarios), for example baking a cake or taking a bus BIBREF0 . To illustrate the importance of script knowledge, consider Example ( SECREF1 ):",
"Without using commonsense knowledge, it may be difficult to tell who ate the food: Rachel or the waitress. In contrast, if we utilize commonsense knowledge, in particular, script knowledge about the eating in a restaurant scenario, we can make the following inferences: Rachel is most likely a customer, since she received an order. It is usually the customer, and not the waitress, who eats the ordered food. So She most likely refers to Rachel.",
"Various approaches for script knowledge extraction and processing have been proposed in recent years. However, systems have been evaluated for specific aspects of script knowledge only, such as event ordering BIBREF1 , BIBREF2 , event paraphrasing BIBREF3 , BIBREF4 or event prediction (namely, the narrative cloze task BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 ). These evaluation methods lack a clear connection to real-world tasks. Our MCScript dataset provides an extrinsic evaluation framework, based on text comprehension involving commonsense knowledge. This framework makes it possible to assess system performance in a multiple-choice question answering setting, without imposing any specific structural or methodical requirements.",
"MCScript is a collection of (1) narrative texts, (2) questions of various types referring to these texts, and (3) pairs of answer candidates for each question. It comprises approx. 2,100 texts and a total of approx. 14,000 questions. Answering a substantial subset of questions requires knowledge beyond the facts mentioned in the text, i.e. it requires inference using commonsense knowledge about everyday activities. An example is given in Figure FIGREF2 . For both questions, the correct choice for an answer requires commonsense knowledge about the activity of planting a tree, which goes beyond what is mentioned in the text. Texts, questions, and answers were obtained through crowdsourcing. In order to ensure high quality, we manually validated and filtered the dataset. Due to our design of the data acquisition process, we ended up with a substantial subset of questions that require commonsense inference (27.4%)."
],
[
"Machine comprehension datasets consist of three main components: texts, questions and answers. In this section, we describe our data collection for these 3 components. We first describe a series of pilot studies that we conducted in order to collect commonsense inference questions (Section SECREF4 ). In Section SECREF5 , we discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk (henceforth MTurk). Section SECREF17 gives information about some necessary postprocessing steps and the dataset validation. Lastly, Section SECREF19 gives statistics about the final dataset."
],
[
"As a starting point for our pilots, we made use of texts from the InScript corpus BIBREF10 , which provides stories centered around everyday situations (see Section SECREF7 ). We conducted three different pilot studies to determine the best way of collecting questions that require inference over commonsense knowledge:",
"The most intuitive way of collecting reading comprehension questions is to show texts to workers and let them formulate questions and answers on the texts, which is what we tried internally in a first pilot. Since our focus is to provide an evaluation framework for inference over commonsense knowledge, we manually assessed the number of questions that indeed require common sense knowledge. We found too many questions and answers collected in this manner to be lexically close to the text.",
"In a second pilot, we investigated the option to take the questions collected for one text and show them as questions for another text of the same scenario. While this method resulted in a larger number of questions that required inference, we found the majority of questions to not make sense at all when paired with another text. Many questions were specific to a text (and not to a scenario), requiring details that could not be answered from other texts. Since the two previous pilot setups resulted in questions that centered around the texts themselves, we decided for a third pilot to not show workers any specific texts at all. Instead, we asked for questions that centered around a specific script scenario (e.g. eating in a restaurant). We found this mode of collection to result in questions that have the right level of specificity for our purposes: namely, questions that are related to a scenario and that can be answered from different texts (about that scenario), but for which a text does not need to provide the answer explicitly.",
"The next section will describe the mode of collection chosen for the final dataset, based on the third pilot, in more detail."
],
[
"As mentioned in the previous section, we decided to base the question collection on script scenarios rather than specific texts. As a starting point for our data collection, we use scenarios from three script data collections BIBREF3 , BIBREF11 , BIBREF12 . Together, these resources contain more than 200 scenarios. To make sure that scenarios have different complexity and content, we selected 80 of them and came up with 20 new scenarios. Together with the 10 scenarios from InScript, we end up with a total of 110 scenarios.",
"For the collection of texts, we followed modiinscript, where workers were asked to write a story about a given activity “as if explaining it to a child”. This results in elaborate and explicit texts that are centered around a single scenario. Consequently, the texts are syntactically simple, facilitating machine comprehension models to focus on semantic challenges and inference. We collected 20 texts for each scenario. Each participant was allowed to write only one story per scenario, but work on as many scenarios as they liked. For each of the 10 scenarios from InScript, we randomly selected 20 existing texts from that resource.",
"For collecting questions, workers were instructed to “imagine they told a story about a certain scenario to a child and want to test if the child understood everything correctly”. This instruction also ensured that questions are linguistically simple, elaborate and explicit. Workers were asked to formulate questions about details of such a situation, i.e. independent of a concrete narrative. This resulted in questions, the answer to which is not literally mentioned in the text. To cover a broad range of question types, we asked participants to write 3 temporal questions (asking about time points and event order), 3 content questions (asking about persons or details in the scenario) and 3 reasoning questions (asking how or why something happened). They were also asked to formulate 6 free questions, which resulted in a total of 15 questions. Asking each worker for a high number of questions enforced that more creative questions were formulated, which go beyond obvious questions for a scenario. Since participants were not shown a concrete story, we asked them to use the neutral pronoun “they” to address the protagonist of the story. We permitted participants to work on as many scenarios as desired and we collected questions from 10 participants per scenario.",
"Our mode of question collection results in questions that are not associated with specific texts. For each text, we collected answers for 15 questions that were randomly selected from the same scenario. Since questions and texts were collected independently, answering a random question is not always possible for a given text. Therefore, we carried out answer collection in two steps. In the first step, we asked participants to assign a category to each text–question pair.",
"We distinguish two categories of answerable questions: The category text-based was assigned to questions that can be answered from the text directly. If the answer could only be inferred by using commonsense knowledge, the category script-based was assigned. Making this distinction is interesting for evaluation purposes, since it enables us to estimate the number of commonsense inference questions. For questions that did not make sense at all given a text, unfitting was assigned. If a question made sense for a text, but it was impossible to find an answer, the label unknown was used.",
"In a second step, we told participants to formulate a plausible correct and a plausible incorrect answer candidate to answerable questions (text-based or script-based). To level out the effort between answerable and non-answerable questions, participants had to write a new question when selecting unknown or unfitting. In order to get reliable judgments about whether or not a question can be answered, we collected data from 5 participants for each question and decided on the final category via majority vote (at least 3 out of 5). Consequently, for each question with a majority vote on either text-based or script-based, there are 3 to 5 correct and incorrect answer candidates, one from each participant who agreed on the category. Questions without a clear majority vote or with ties were not included in the dataset.",
"We performed four post-processing steps on the collected data.",
"We manually filtered out texts that were instructional rather than narrative.",
"All texts, questions and answers were spellchecked by running aSpell and manually inspecting all corrections proposed by the spellchecker.",
"We found that some participants did not use “they” when referring to the protagonist. We identified “I”, “you”, “he”, “she”, “my”, “your”, “his”, “her” and “the person” as most common alternatives and replaced each appearance manually with “they” or “their”, if appropriate.",
"We manually filtered out invalid questions, e.g. questions that are suggestive (“Should you ask an adult before using a knife?”) or that ask for the personal opinion of the reader (“Do you think going to the museum was a good idea?”)."
],
[
"We finalized the dataset by selecting one correct and one incorrect answer for each question–text pair. To increase the proportion of non-trivial inference cases, we chose the candidate with the lowest lexical overlap with the text from the set of correct answer candidates as correct answer. Using this principle also for incorrect answers leads to problems. We found that many incorrect candidates were not plausible answers to a given question. Instead of selecting a candidate based on overlap, we hence decided to rely on majority vote and selected the candidate from the set of incorrect answers that was most often mentioned.",
"For this step, we normalized each candidate by lowercasing, deleting punctuation and stop words (articles, and, to and or), and transforming all number words into digits, using text2num. We merged all answers that were string-identical, contained another answer, or had a Levenshtein distance BIBREF13 of 3 or less to another answer. The “most frequent answer” was then selected based on how many other answers it was merged with. Only if there was no majority, we selected the candidate with the highest overlap with the text as a fallback. Due to annotation mistakes, we found a small number of chosen correct and incorrect answers to be inappropriate, that is, some “correct” answers were actually incorrect and vice versa. Therefore, we manually validated the complete dataset in a final step. We asked annotators to read all texts, questions, and answers, and to mark for each question whether the correct and incorrect answers were appropriate. If an answer was inappropriate or contained any errors, they selected a different answer from the set of collected candidates. For approximately 11.5% of the questions, at least one answer was replaced. 135 questions (approx. 1%) were excluded from the dataset because no appropriate correct or incorrect answer could be found."
],
[
"For all experiments, we admitted only experienced MTurk workers who are based in the US. One HIT consisted of writing one text for the text collection, formulating 15 questions for the question collection, or finding 15 pairs of answers for the answer collection. We paid $0.50 per HIT for the text and question collection, and $0.60 per HIT for the answer collection.",
"More than 2,100 texts were paired with 15 questions each, resulting in a total number of approx. 32,000 annotated questions. For 13% of the questions, the workers did not agree on one of the 4 categories with a 3 out of 5 majority, so we did not include these questions in our dataset.",
"The distribution of category labels on the remaining 87% is shown in Table TABREF10 . 14,074 (52%) questions could be answered. Out of the answerable questions, 10,160 could be answered from the text directly (text-based) and 3,914 questions required the use of commonsense knowledge (script-based). After removing 135 questions during the validation, the final dataset comprises 13,939 questions, 3,827 of which require commonsense knowledge (i.e. 27.4%). This ratio was manually verified based on a random sample of questions.",
"We split the dataset into training (9,731 questions on 1,470 texts), development (1,411 questions on 219 texts), and test set (2,797 questions on 430 texts). Each text appears only in one of the three sets. The complete set of texts for 5 scenarios was held out for the test set. The average text, question, and answer length is 196.0 words, 7.8 words, and 3.6 words, respectively. On average, there are 6.7 questions per text.",
"Figure FIGREF21 shows the distribution of question types in the dataset, which we identified using simple heuristics based on the first words of a question: Yes/no questions were identified as questions starting with an auxiliary or modal verb, all other question types were determined based on the question word.",
"We found that 29% of all questions are yes/no questions. Questions about details of a situation (such as what/ which and who) form the second most frequent question category. Temporal questions (when and how long/often) form approx. 11% of all questions. We leave a more detailed analysis of question types for future work."
],
[
"As can be seen from the data statistics, our mode of collection leads to a substantial proportion of questions that require inference using commonsense knowledge. Still, the dataset contains a large number of questions in which the answer is explicitly contained or implied by the text: Figure FIGREF22 shows passages from an example text of the dataset together with two such questions. For question Q1, the answer is given literally in the text. Answering question Q2 is not as simple; it can be solved, however, via standard semantic relatedness information (chicken and hotdogs are meat; water, soda and juice are drinks).",
"The following cases require commonsense inference to be decided. In all these cases, the answers are not overtly contained nor easily derivable from the respective texts. We do not show the full texts, but only the scenario names for each question.",
"Example UID23 refers to a library setting. Script knowledge helps in assessing that usually, paying is not an event when borrowing a book, which answers the question. Similarly, event information helps in answering the questions in Examples UID24 and UID25 . In Example UID26 , knowledge about the typical role of parents in the preparation of a picnic will enable a plausibility decision. Similarly, in Example UID27 , it is commonsense knowledge that showers usually take a few minutes rather than hours.",
"There are also cases in which the answer can be inferred from the text, but where commonsense knowledge is still beneficial: The text for example UID28 does not contain the information that breakfast is eaten in the morning, but it could still be inferred from many pointers in the text (e.g. phrases such as I woke up), or from commonsense knowledge.",
"These few examples illustrate that our dataset covers questions with a wide spectrum of difficulty, from rather simple questions that can be answered from the text to challenging inference problems."
],
[
"In this section, we assess the performance of baseline models on MCScript, using accuracy as the evaluation measure. We employ models of differing complexity: two unsupervised models using only word information and distributional information, respectively, and two supervised neural models. We assess performance on two dimensions: One, we show how well the models perform on text-based questions as compared to questions that require common sense for finding the correct answer. Two, we evaluate each model for each different question type."
],
[
"We first use a simple word matching baseline, by selecting the answer that has the highest literal overlap with the text. In case of a tie, we randomly select one of the answers.",
"The second baseline is a sliding window approach that looks at windows of INLINEFORM0 tokens on the text. Each text and each answer are represented as a sequence of word embeddings. The embeddings for each window of size INLINEFORM1 and each answer are then averaged to derive window and answer representations, respectively. The answer with the lowest cosine distance to one of the windows of the text is then selected as correct.",
"We employ a simple neural model as a third baseline. In this model, each text, question, and answer is represented by a vector. For a given sequence of words INLINEFORM0 , we compute this representation by averaging over the components of the word embeddings INLINEFORM1 that correspond to a word INLINEFORM2 , and then apply a linear transformation using a weight matrix. This procedure is applied to each answer INLINEFORM3 to derive an answer representation INLINEFORM4 . The representation of a text INLINEFORM5 and of a question INLINEFORM6 are computed in the same way. We use different weight matrices for INLINEFORM7 , INLINEFORM8 and INLINEFORM9 , respectively. A combined representation INLINEFORM10 for the text–question pair is then constructed using a bilinear transformation matrix INLINEFORM11 : DISPLAYFORM0 ",
"We compute a score for each answer by using the dot product and pass the scores for both answers through a softmax layer for prediction. The probability INLINEFORM0 for an answer INLINEFORM1 to be correct is thus defined as: DISPLAYFORM0 ",
"The attentive reader is a well-established machine comprehension model that reaches good performance e.g. on the CNN/Daily Mail corpus BIBREF14 , BIBREF15 . We use the model formulation by chen2016thorough and lai2017race, who employ bilinear weight functions to compute both attention and answer-text fit. Bi-directional GRUs are used to encode questions, texts and answers into hidden representations. For a question INLINEFORM0 and an answer INLINEFORM1 , the last state of the GRUs, INLINEFORM2 and INLINEFORM3 , are used as representations, while the text is encoded as a sequence of hidden states INLINEFORM4 . We then compute an attention score INLINEFORM5 for each hidden state INLINEFORM6 using the question representation INLINEFORM7 , a weight matrix INLINEFORM8 , and an attention bias INLINEFORM9 . Last, a text representation INLINEFORM10 is computed as a weighted average of the hidden representations: DISPLAYFORM0 ",
"The probability INLINEFORM0 of answer INLINEFORM1 being correct is then predicted using another bilinear weight matrix INLINEFORM2 , followed by an application of the softmax function over both answer options for the question: DISPLAYFORM0 "
],
[
"Texts, questions and answers were tokenized using NLTK and lowercased. We used 100-dimensional GloVe vectors BIBREF16 to embed each token. For the neural models, the embeddings are used to initialize the token representations, and are refined during training. For the sliding similarity window approach, we set INLINEFORM0 .",
"The vocabulary of the neural models was extracted from training and development data. For optimizing the bilinear model and the attentive reader, we used vanilla stochastic gradient descent with gradient clipping, if the norm of gradients exceeds 10. The size of the hidden layers was tuned to 64, with a learning rate of INLINEFORM0 , for both models. We apply a dropout of INLINEFORM1 to the word embeddings. Batch size was set to 25 and all models were trained for 150 epochs. During training, we measured performance on the development set, and we selected the model from the best performing epoch for testing."
],
[
"As an upper bound for model performance, we assess how well humans can solve our task. Two trained annotators labeled the correct answer on all instances of the test set. They agreed with the gold standard in 98.2 % of cases. This result shows that humans have no difficulty in finding the correct answer, irrespective of the question type.",
"Table TABREF37 shows the performance of the baseline models as compared to the human upper bound and a random baseline. As can be seen, neural models have a clear advantage over the pure word overlap baseline, which performs worst, with an accuracy of INLINEFORM0 .",
"The low accuracy is mostly due to the nature of correct answers in our data: Each correct answer has a low overlap with the text by design. Since the overlap model selects the answer with a high overlap to the text, it does not perform well. In particular, this also explains the very bad result on text-based questions. The sliding similarity window model does not outperform the simple word overlap model by a large margin: Distributional information alone is insufficient to handle complex questions in the dataset.",
"Both neural models outperform the unsupervised baselines by a large margin. When comparing the two models, the attentive reader is able to beat the bilinear model by only INLINEFORM0 . A possible explanation for this is that the attentive reader only attends to the text. Since many questions cannot be directly answered from the text, the attentive reader is not able to perform significantly better than a simpler neural model.",
"What is surprising is that the attentive reader works better on commonsense-based questions than on text questions. This can be explained by the fact that many commonsense questions do have prototypical answers within a scenario, irrespective of the text. The attentive reader is apparently able to just memorize these prototypical answers, thus achieving higher accuracy.",
"Inspecting attention values of the attentive reader, we found that in most cases, the model is unable to properly attend to the relevant parts of the text, even when the answer is literally given in the text. A possible explanation is that the model is confused by the large amount of questions that cannot be answered from the text directly, which might confound the computation of attention values.",
"Also, the attentive reader was originally constructed for reconstructing literal text spans as answers. Our mode of answer collection, however, results in many correct answers that cannot be found verbatim in the text. This presents difficulties for the attention mechanism.",
"The fact that an attention model outperforms a simple bilinear baseline only marginally shows that MCScript poses a new challenge to machine comprehension systems. Models concentrating solely on the text are insufficient to perform well on the data.",
"Figure FIGREF39 gives accuracy values of all baseline systems on the most frequent question types (appearing >25 times in the test data), as determined based on the question words (see Section SECREF19 ). The numbers depicted on the left-hand side of the y-axis represent model accuracy. The right-hand side of the y-axis indicates the number of times a question type appears in the test data.",
"The neural models unsurprisingly outperform the other models in most cases, and the difference for who questions is largest. A large number of these questions ask for the narrator of the story, who is usually not mentioned literally in the text, since most stories are written in the first person.",
"It is also apparent that all models perform rather badly on yes/no questions. Each model basically compares the answer to some representation of the text. For yes/no questions, this makes sense for less than half of all cases. For the majority of yes/no questions, however, answers consist only of yes or no, without further content words."
],
[
"In recent years, a number of reading comprehension datasets have been proposed, including MCTest BIBREF17 , BAbI BIBREF18 , the Children's Book Test (CBT, hill2015goldilocks), CNN/Daily Mail BIBREF14 , the Stanford Question Answering Dataset (SQuAD, rajpurkar2016squad), and RACE BIBREF19 . These datasets differ with respect to text type (Wikipedia texts, examination texts, etc.), mode of answer selection (span-based, multiple choice, etc.) and test systems regarding different aspects of language understanding, but they do not explicitly address commonsense knowledge.",
"Two notable exceptions are the NewsQA and TriviaQA datasets. NewsQA BIBREF20 is a dataset of newswire texts from CNN with questions and answers written by crowdsourcing workers. NewsQA closely resembles our own data collection with respect to the method of data acquisition. As for our data collection, full texts were not shown to workers as a basis for question formulation, but only the text's title and a short summary, to avoid literal repetitions and support the generation of non-trivial questions requiring background knowledge. The NewsQA text collection differs from ours in domain and genre (newswire texts vs. narrative stories about everyday events). Knowledge required to answer the questions is mostly factual knowledge and script knowledge is only marginally relevant. Also, the task is not exactly question answering, but identification of document passages containing the answer.",
"TriviaQA BIBREF21 is a corpus that contains automatically collected question-answer pairs from 14 trivia and quiz-league websites, together with web-crawled evidence documents from Wikipedia and Bing. While a majority of questions require world knowledge for finding the correct answer, it is mostly factual knowledge."
],
[
"We present a new dataset for the task of machine comprehension focussing on commonsense knowledge. Questions were collected based on script scenarios, rather than individual texts, which resulted in question–answer pairs that explicitly involve commonsense knowledge. In contrast to previous evaluation tasks, this setup allows us for the first time to assess the contribution of script knowledge for computational models of language understanding in a real-world evaluation scenario.",
"We expect our dataset to become a standard benchmark for testing models of commonsense and script knowledge. Human performance shows that the dataset is highly reliable. The results of several baselines, in contrast, illustrate that our task provides challenging test cases for the broader natural language processing community. MCScript forms the basis of a shared task organized at SemEval 2018. The dataset is available at http://www.sfb1102.uni-saarland.de/?page_id=2582."
],
[
"We thank the reviewers for their helpful comments. We also thank Florian Pusse for the help with the MTurk experiments and our student assistants Christine Schäfer, Damyana Gateva, Leonie Harter, Sarah Mameche, Stefan Grünewald and Tatiana Anikina for help with the annotations. This research was funded by the German Research Foundation (DFG) as part of SFB 1102 `Information Density and Linguistic Encoding' and EXC 284 `Multimodal Computing and Interaction'."
]
],
"section_name": [
"Introduction",
"Corpus",
"Pilot Study",
"Data Collection",
"Answer Selection and Validation",
"Data Statistics",
"Data Analysis",
"Experiments",
"Models",
"Implementation Details",
"Results and Evaluation",
"Related Work",
"Summary",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"023591f9d4955d68b33becf69c45408a63ed0984",
"b4d0c33914204c403fbf5a8eb712fba2d342ab5d"
],
"answer": [
{
"evidence": [
"More than 2,100 texts were paired with 15 questions each, resulting in a total number of approx. 32,000 annotated questions. For 13% of the questions, the workers did not agree on one of the 4 categories with a 3 out of 5 majority, so we did not include these questions in our dataset.",
"The distribution of category labels on the remaining 87% is shown in Table TABREF10 . 14,074 (52%) questions could be answered. Out of the answerable questions, 10,160 could be answered from the text directly (text-based) and 3,914 questions required the use of commonsense knowledge (script-based). After removing 135 questions during the validation, the final dataset comprises 13,939 questions, 3,827 of which require commonsense knowledge (i.e. 27.4%). This ratio was manually verified based on a random sample of questions."
],
"extractive_spans": [],
"free_form_answer": "More than 2,100 texts were paired with 15 questions each, resulting in a total number of approx. 32,000 annotated questions. 13% of the questions are not answerable. Out of the answerable questions, 10,160 could be answered from the text directly (text-based) and 3,914 questions required the use of commonsense knowledge (script-based). The final dataset comprises 13,939 questions, 3,827 of which require commonsense knowledge (i.e. 27.4%).",
"highlighted_evidence": [
"More than 2,100 texts were paired with 15 questions each, resulting in a total number of approx. 32,000 annotated questions. For 13% of the questions, the workers did not agree on one of the 4 categories with a 3 out of 5 majority, so we did not include these questions in our dataset.\n\nThe distribution of category labels on the remaining 87% is shown in Table TABREF10 . 14,074 (52%) questions could be answered. Out of the answerable questions, 10,160 could be answered from the text directly (text-based) and 3,914 questions required the use of commonsense knowledge (script-based). After removing 135 questions during the validation, the final dataset comprises 13,939 questions, 3,827 of which require commonsense knowledge (i.e. 27.4%). This ratio was manually verified based on a random sample of questions."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The distribution of category labels on the remaining 87% is shown in Table TABREF10 . 14,074 (52%) questions could be answered. Out of the answerable questions, 10,160 could be answered from the text directly (text-based) and 3,914 questions required the use of commonsense knowledge (script-based). After removing 135 questions during the validation, the final dataset comprises 13,939 questions, 3,827 of which require commonsense knowledge (i.e. 27.4%). This ratio was manually verified based on a random sample of questions.",
"We split the dataset into training (9,731 questions on 1,470 texts), development (1,411 questions on 219 texts), and test set (2,797 questions on 430 texts). Each text appears only in one of the three sets. The complete set of texts for 5 scenarios was held out for the test set. The average text, question, and answer length is 196.0 words, 7.8 words, and 3.6 words, respectively. On average, there are 6.7 questions per text."
],
"extractive_spans": [],
"free_form_answer": "Distribution of category labels, number of answerable-not answerable questions, number of text-based and script-based questions, average text, question, and answer length, number of questions per text",
"highlighted_evidence": [
"The distribution of category labels on the remaining 87% is shown in Table TABREF10 . 14,074 (52%) questions could be answered. Out of the answerable questions, 10,160 could be answered from the text directly (text-based) and 3,914 questions required the use of commonsense knowledge (script-based). After removing 135 questions during the validation, the final dataset comprises 13,939 questions, 3,827 of which require commonsense knowledge (i.e. 27.4%). ",
"The average text, question, and answer length is 196.0 words, 7.8 words, and 3.6 words, respectively. On average, there are 6.7 questions per text."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"49042ff7ef49dfb8b354321fb28a4b1adea32800"
],
"answer": [
{
"evidence": [
"The distribution of category labels on the remaining 87% is shown in Table TABREF10 . 14,074 (52%) questions could be answered. Out of the answerable questions, 10,160 could be answered from the text directly (text-based) and 3,914 questions required the use of commonsense knowledge (script-based). After removing 135 questions during the validation, the final dataset comprises 13,939 questions, 3,827 of which require commonsense knowledge (i.e. 27.4%). This ratio was manually verified based on a random sample of questions."
],
"extractive_spans": [
"13,939"
],
"free_form_answer": "",
"highlighted_evidence": [
"After removing 135 questions during the validation, the final dataset comprises 13,939 questions, 3,827 of which require commonsense knowledge (i.e. 27.4%). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"811b31db6251bb6f3b0695e5e8348fff298b6504",
"fbb1b2c3729b3e10cd18f484889576a93da43ebf"
],
"answer": [
{
"evidence": [
"Machine comprehension datasets consist of three main components: texts, questions and answers. In this section, we describe our data collection for these 3 components. We first describe a series of pilot studies that we conducted in order to collect commonsense inference questions (Section SECREF4 ). In Section SECREF5 , we discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk (henceforth MTurk). Section SECREF17 gives information about some necessary postprocessing steps and the dataset validation. Lastly, Section SECREF19 gives statistics about the final dataset."
],
"extractive_spans": [
"Amazon Mechanical Turk"
],
"free_form_answer": "",
"highlighted_evidence": [
"In Section SECREF5 , we discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk (henceforth MTurk)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Machine comprehension datasets consist of three main components: texts, questions and answers. In this section, we describe our data collection for these 3 components. We first describe a series of pilot studies that we conducted in order to collect commonsense inference questions (Section SECREF4 ). In Section SECREF5 , we discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk (henceforth MTurk). Section SECREF17 gives information about some necessary postprocessing steps and the dataset validation. Lastly, Section SECREF19 gives statistics about the final dataset."
],
"extractive_spans": [
"Amazon Mechanical Turk"
],
"free_form_answer": "",
"highlighted_evidence": [
" In Section SECREF5 , we discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk (henceforth MTurk)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"24f6ef8dd567c63a004dc4f6f3bf48fb17e57ebf"
],
"answer": [
{
"evidence": [
"Machine comprehension datasets consist of three main components: texts, questions and answers. In this section, we describe our data collection for these 3 components. We first describe a series of pilot studies that we conducted in order to collect commonsense inference questions (Section SECREF4 ). In Section SECREF5 , we discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk (henceforth MTurk). Section SECREF17 gives information about some necessary postprocessing steps and the dataset validation. Lastly, Section SECREF19 gives statistics about the final dataset."
],
"extractive_spans": [],
"free_form_answer": "The data was collected using 3 components: describe a series of pilot studies that were conducted to collect commonsense inference questions, then discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk and gives information about some necessary postprocessing steps and the dataset validation.",
"highlighted_evidence": [
"In this section, we describe our data collection for these 3 components. We first describe a series of pilot studies that we conducted in order to collect commonsense inference questions (Section SECREF4 ). In Section SECREF5 , we discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk (henceforth MTurk). Section SECREF17 gives information about some necessary postprocessing steps and the dataset validation. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"what dataset statistics are provided?",
"what is the size of their dataset?",
"what crowdsourcing platform was used?",
"how was the data collected?"
],
"question_id": [
"ad1be65c4f0655ac5c902d17f05454c0d4c4a15d",
"2eb9280d72cde9de3aabbed993009a98a5fe0990",
"154a721ccc1d425688942e22e75af711b423e086",
"84bad9a821917cb96584cf5383c6d2a035358d7c"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: An example for a text snippet with two reading comprehension questions.",
"Table 1: Distribution of question categories",
"Figure 2: Distribution of question types in the data.",
"Figure 3: An example text with 2 questions from MCScript",
"Figure 4: Accuracy values of the baseline models on question types appearing > 25 times.",
"Table 2: Accuracy of the baseline systems on text-based (Text), on commonsense-based questions (CS), and on the whole test set (Total). All numbers are percentages."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"6-Figure4-1.png",
"6-Table2-1.png"
]
} | [
"what dataset statistics are provided?",
"how was the data collected?"
] | [
[
"1803.05223-Data Statistics-3",
"1803.05223-Data Statistics-1",
"1803.05223-Data Statistics-2"
],
[
"1803.05223-Corpus-0"
]
] | [
"Distribution of category labels, number of answerable-not answerable questions, number of text-based and script-based questions, average text, question, and answer length, number of questions per text",
"The data was collected using 3 components: describe a series of pilot studies that were conducted to collect commonsense inference questions, then discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk and gives information about some necessary postprocessing steps and the dataset validation."
] | 217 |
1909.06162 | Neural Architectures for Fine-Grained Propaganda Detection in News | This paper describes our system (MIC-CIS) details and results of participation in the fine-grained propaganda detection shared task 2019. To address the tasks of sentence (SLC) and fragment level (FLC) propaganda detection, we explore different neural architectures (e.g., CNN, LSTM-CRF and BERT) and extract linguistic (e.g., part-of-speech, named entity, readability, sentiment, emotion, etc.), layout and topical features. Specifically, we have designed multi-granularity and multi-tasking neural architectures to jointly perform both the sentence and fragment level propaganda detection. Additionally, we investigate different ensemble schemes such as majority-voting, relax-voting, etc. to boost overall system performance. Compared to the other participating systems, our submissions are ranked 3rd and 4th in FLC and SLC tasks, respectively. | {
"paragraphs": [
[
"In the age of information dissemination without quality control, it has enabled malicious users to spread misinformation via social media and aim individual users with propaganda campaigns to achieve political and financial gains as well as advance a specific agenda. Often disinformation is complied in the two major forms: fake news and propaganda, where they differ in the sense that the propaganda is possibly built upon true information (e.g., biased, loaded language, repetition, etc.).",
"Prior works BIBREF0, BIBREF1, BIBREF2 in detecting propaganda have focused primarily at document level, typically labeling all articles from a propagandistic news outlet as propaganda and thus, often non-propagandistic articles from the outlet are mislabeled. To this end, EMNLP19DaSanMartino focuses on analyzing the use of propaganda and detecting specific propagandistic techniques in news articles at sentence and fragment level, respectively and thus, promotes explainable AI. For instance, the following text is a propaganda of type `slogan'.",
"Trump tweeted: $\\underbrace{\\text{`}`{\\texttt {BUILD THE WALL!}\"}}_{\\text{slogan}}$",
"Shared Task: This work addresses the two tasks in propaganda detection BIBREF3 of different granularities: (1) Sentence-level Classification (SLC), a binary classification that predicts whether a sentence contains at least one propaganda technique, and (2) Fragment-level Classification (FLC), a token-level (multi-label) classification that identifies both the spans and the type of propaganda technique(s).",
"Contributions: (1) To address SLC, we design an ensemble of different classifiers based on Logistic Regression, CNN and BERT, and leverage transfer learning benefits using the pre-trained embeddings/models from FastText and BERT. We also employed different features such as linguistic (sentiment, readability, emotion, part-of-speech and named entity tags, etc.), layout, topics, etc. (2) To address FLC, we design a multi-task neural sequence tagger based on LSTM-CRF and linguistic features to jointly detect propagandistic fragments and its type. Moreover, we investigate performing FLC and SLC jointly in a multi-granularity network based on LSTM-CRF and BERT. (3) Our system (MIC-CIS) is ranked 3rd (out of 12 participants) and 4th (out of 25 participants) in FLC and SLC tasks, respectively."
],
[
"Some of the propaganda techniques BIBREF3 involve word and phrases that express strong emotional implications, exaggeration, minimization, doubt, national feeling, labeling , stereotyping, etc. This inspires us in extracting different features (Table TABREF1) including the complexity of text, sentiment, emotion, lexical (POS, NER, etc.), layout, etc. To further investigate, we use topical features (e.g., document-topic proportion) BIBREF4, BIBREF5, BIBREF6 at sentence and document levels in order to determine irrelevant themes, if introduced to the issue being discussed (e.g., Red Herring).",
"For word and sentence representations, we use pre-trained vectors from FastText BIBREF7 and BERT BIBREF8."
],
[
"Figure FIGREF2 (left) describes the three components of our system for SLC task: features, classifiers and ensemble. The arrows from features-to-classifier indicate that we investigate linguistic, layout and topical features in the two binary classifiers: LogisticRegression and CNN. For CNN, we follow the architecture of DBLP:conf/emnlp/Kim14 for sentence-level classification, initializing the word vectors by FastText or BERT. We concatenate features in the last hidden layer before classification.",
"One of our strong classifiers includes BERT that has achieved state-of-the-art performance on multiple NLP benchmarks. Following DBLP:conf/naacl/DevlinCLT19, we fine-tune BERT for binary classification, initializing with a pre-trained model (i.e., BERT-base, Cased). Additionally, we apply a decision function such that a sentence is tagged as propaganda if prediction probability of the classifier is greater than a threshold ($\\tau $). We relax the binary decision boundary to boost recall, similar to pankajgupta:CrossRE2019.",
"Ensemble of Logistic Regression, CNN and BERT: In the final component, we collect predictions (i.e., propaganda label) for each sentence from the three ($\\mathcal {M}=3$) classifiers and thus, obtain $\\mathcal {M}$ number of predictions for each sentence. We explore two ensemble strategies (Table TABREF1): majority-voting and relax-voting to boost precision and recall, respectively."
],
[
"Figure FIGREF2 (right) describes our system for FLC task, where we design sequence taggers BIBREF9, BIBREF10 in three modes: (1) LSTM-CRF BIBREF11 with word embeddings ($w\\_e$) and character embeddings $c\\_e$, token-level features ($t\\_f$) such as polarity, POS, NER, etc. (2) LSTM-CRF+Multi-grain that jointly performs FLC and SLC with FastTextWordEmb and BERTSentEmb, respectively. Here, we add binary sentence classification loss to sequence tagging weighted by a factor of $\\alpha $. (3) LSTM-CRF+Multi-task that performs propagandistic span/fragment detection (PFD) and FLC (fragment detection + 19-way classification).",
"Ensemble of Multi-grain, Multi-task LSTM-CRF with BERT: Here, we build an ensemble by considering propagandistic fragments (and its type) from each of the sequence taggers. In doing so, we first perform majority voting at the fragment level for the fragment where their spans exactly overlap. In case of non-overlapping fragments, we consider all. However, when the spans overlap (though with the same label), we consider the fragment with the largest span."
],
[
"Data: While the SLC task is binary, the FLC consists of 18 propaganda techniques BIBREF3. We split (80-20%) the annotated corpus into 5-folds and 3-folds for SLC and FLC tasks, respectively. The development set of each the folds is represented by dev (internal); however, the un-annotated corpus used in leaderboard comparisons by dev (external). We remove empty and single token sentences after tokenization. Experimental Setup: We use PyTorch framework for the pre-trained BERT model (Bert-base-cased), fine-tuned for SLC task. In the multi-granularity loss, we set $\\alpha = 0.1$ for sentence classification based on dev (internal, fold1) scores. We use BIO tagging scheme of NER in FLC task. For CNN, we follow DBLP:conf/emnlp/Kim14 with filter-sizes of [2, 3, 4, 5, 6], 128 filters and 16 batch-size. We compute binary-F1and macro-F1 BIBREF12 in SLC and FLC, respectively on dev (internal)."
],
[
"Table TABREF10 shows the scores on dev (internal and external) for SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\\tau \\ge 0.35$ is found optimal.",
"We choose the three different models in the ensemble: Logistic Regression, CNN and BERT on fold1 and subsequently an ensemble+ of r3, r6 and r12 from each fold1-5 (i.e., 15 models) to obtain predictions for dev (external). We investigate different ensemble schemes (r17-r19), where we observe that the relax-voting improves recall and therefore, the higher F1 (i.e., 0.673). In postprocess step, we check for repetition propaganda technique by computing cosine similarity between the current sentence and its preceding $w=10$ sentence vectors (i.e., BERTSentEmb) in the document. If the cosine-similarity is greater than $\\lambda \\in \\lbrace .99, .95\\rbrace $, then the current sentence is labeled as propaganda due to repetition. Comparing r19 and r21, we observe a gain in recall, however an overall decrease in F1 applying postprocess.",
"Finally, we use the configuration of r19 on the test set. The ensemble+ of (r4, r7 r12) was analyzed after test submission. Table TABREF9 (SLC) shows that our submission is ranked at 4th position."
],
[
"Table TABREF11 shows the scores on dev (internal and external) for FLC task. Observe that the features (i.e., polarity, POS and NER in row II) when introduced in LSTM-CRF improves F1. We run multi-grained LSTM-CRF without BERTSentEmb (i.e., row III) and with it (i.e., row IV), where the latter improves scores on dev (internal), however not on dev (external). Finally, we perform multi-tasking with another auxiliary task of PFD. Given the scores on dev (internal and external) using different configurations (rows I-V), it is difficult to infer the optimal configuration. Thus, we choose the two best configurations (II and IV) on dev (internal) set and build an ensemble+ of predictions (discussed in section SECREF6), leading to a boost in recall and thus an improved F1 on dev (external).",
"Finally, we use the ensemble+ of (II and IV) from each of the folds 1-3, i.e., $|{\\mathcal {M}}|=6$ models to obtain predictions on test. Table TABREF9 (FLC) shows that our submission is ranked at 3rd position."
],
[
"Our system (Team: MIC-CIS) explores different neural architectures (CNN, BERT and LSTM-CRF) with linguistic, layout and topical features to address the tasks of fine-grained propaganda detection. We have demonstrated gains in performance due to the features, ensemble schemes, multi-tasking and multi-granularity architectures. Compared to the other participating systems, our submissions are ranked 3rd and 4th in FLC and SLC tasks, respectively.",
"In future, we would like to enrich BERT models with linguistic, layout and topical features during their fine-tuning. Further, we would also be interested in understanding and analyzing the neural network learning, i.e., extracting salient fragments (or key-phrases) in the sentence that generate propaganda, similar to pankajgupta:2018LISA in order to promote explainable AI."
]
],
"section_name": [
"Introduction",
"System Description ::: Linguistic, Layout and Topical Features",
"System Description ::: Sentence-level Propaganda Detection",
"System Description ::: Fragment-level Propaganda Detection",
"Experiments and Evaluation",
"Experiments and Evaluation ::: Results: Sentence-Level Propaganda",
"Experiments and Evaluation ::: Results: Fragment-Level Propaganda",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"502aa7bfda8225dda26eefab72dabea07bb074b9"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Comparison of our system (MIC-CIS) with top-5 participants: Scores on Test set for SLC and FLC"
],
"extractive_spans": [],
"free_form_answer": "For SLC task, the \"ltuorp\" team has the best performing model (0.6323/0.6028/0.6649 for F1/P/R respectively) and for FLC task the \"newspeak\" team has the best performing model (0.2488/0.2863/0.2201 for F1/P/R respectively).",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Comparison of our system (MIC-CIS) with top-5 participants: Scores on Test set for SLC and FLC"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"023ef64a191c531e518ec6d58c5342c9fc040dca",
"9e11478e485e47eefcfd4b861442884657d22897"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features."
],
"extractive_spans": [],
"free_form_answer": "Linguistic",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Shared Task: This work addresses the two tasks in propaganda detection BIBREF3 of different granularities: (1) Sentence-level Classification (SLC), a binary classification that predicts whether a sentence contains at least one propaganda technique, and (2) Fragment-level Classification (FLC), a token-level (multi-label) classification that identifies both the spans and the type of propaganda technique(s).",
"Contributions: (1) To address SLC, we design an ensemble of different classifiers based on Logistic Regression, CNN and BERT, and leverage transfer learning benefits using the pre-trained embeddings/models from FastText and BERT. We also employed different features such as linguistic (sentiment, readability, emotion, part-of-speech and named entity tags, etc.), layout, topics, etc. (2) To address FLC, we design a multi-task neural sequence tagger based on LSTM-CRF and linguistic features to jointly detect propagandistic fragments and its type. Moreover, we investigate performing FLC and SLC jointly in a multi-granularity network based on LSTM-CRF and BERT. (3) Our system (MIC-CIS) is ranked 3rd (out of 12 participants) and 4th (out of 25 participants) in FLC and SLC tasks, respectively.",
"Table TABREF10 shows the scores on dev (internal and external) for SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\\tau \\ge 0.35$ is found optimal."
],
"extractive_spans": [
"BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
" This work addresses the two tasks in propaganda detection BIBREF3 of different granularities: (1) Sentence-level Classification (SLC), a binary classification that predicts whether a sentence contains at least one propaganda technique, and (2) Fragment-level Classification (FLC), a token-level (multi-label) classification that identifies both the spans and the type of propaganda technique(s).",
"To address SLC, we design an ensemble of different classifiers based on Logistic Regression, CNN and BERT, and leverage transfer learning benefits using the pre-trained embeddings/models from FastText and BERT. ",
"Table TABREF10 shows the scores on dev (internal and external) for SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\\tau \\ge 0.35$ is found optimal."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"531de9911ef2e40e170927f15309b954df0d9e24",
"f5f6bb057e95cc4c6fc4d9237a726d46e8d970b0"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features."
],
"extractive_spans": [],
"free_form_answer": "The best ensemble topped the best single model by 0.029 in F1 score on dev (external).",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF10 shows the scores on dev (internal and external) for SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\\tau \\ge 0.35$ is found optimal.",
"We choose the three different models in the ensemble: Logistic Regression, CNN and BERT on fold1 and subsequently an ensemble+ of r3, r6 and r12 from each fold1-5 (i.e., 15 models) to obtain predictions for dev (external). We investigate different ensemble schemes (r17-r19), where we observe that the relax-voting improves recall and therefore, the higher F1 (i.e., 0.673). In postprocess step, we check for repetition propaganda technique by computing cosine similarity between the current sentence and its preceding $w=10$ sentence vectors (i.e., BERTSentEmb) in the document. If the cosine-similarity is greater than $\\lambda \\in \\lbrace .99, .95\\rbrace $, then the current sentence is labeled as propaganda due to repetition. Comparing r19 and r21, we observe a gain in recall, however an overall decrease in F1 applying postprocess.",
"Finally, we use the configuration of r19 on the test set. The ensemble+ of (r4, r7 r12) was analyzed after test submission. Table TABREF9 (SLC) shows that our submission is ranked at 4th position.",
"FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features.",
"Table TABREF11 shows the scores on dev (internal and external) for FLC task. Observe that the features (i.e., polarity, POS and NER in row II) when introduced in LSTM-CRF improves F1. We run multi-grained LSTM-CRF without BERTSentEmb (i.e., row III) and with it (i.e., row IV), where the latter improves scores on dev (internal), however not on dev (external). Finally, we perform multi-tasking with another auxiliary task of PFD. Given the scores on dev (internal and external) using different configurations (rows I-V), it is difficult to infer the optimal configuration. Thus, we choose the two best configurations (II and IV) on dev (internal) set and build an ensemble+ of predictions (discussed in section SECREF6), leading to a boost in recall and thus an improved F1 on dev (external).",
"Finally, we use the ensemble+ of (II and IV) from each of the folds 1-3, i.e., $|{\\mathcal {M}}|=6$ models to obtain predictions on test. Table TABREF9 (FLC) shows that our submission is ranked at 3rd position."
],
"extractive_spans": [],
"free_form_answer": "They increased F1 Score by 0.029 in Sentence Level Classification, and by 0.044 in Fragment-Level classification",
"highlighted_evidence": [
"Table TABREF10 shows the scores on dev (internal and external) for SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\\tau \\ge 0.35$ is found optimal.\n\nWe choose the three different models in the ensemble: Logistic Regression, CNN and BERT on fold1 and subsequently an ensemble+ of r3, r6 and r12 from each fold1-5 (i.e., 15 models) to obtain predictions for dev (external). We investigate different ensemble schemes (r17-r19), where we observe that the relax-voting improves recall and therefore, the higher F1 (i.e., 0.673). In postprocess step, we check for repetition propaganda technique by computing cosine similarity between the current sentence and its preceding $w=10$ sentence vectors (i.e., BERTSentEmb) in the document. If the cosine-similarity is greater than $\\lambda \\in \\lbrace .99, .95\\rbrace $, then the current sentence is labeled as propaganda due to repetition. Comparing r19 and r21, we observe a gain in recall, however an overall decrease in F1 applying postprocess.\n\nFinally, we use the configuration of r19 on the test set. The ensemble+ of (r4, r7 r12) was analyzed after test submission. Table TABREF9 (SLC) shows that our submission is ranked at 4th position.",
"FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features.",
"Table TABREF11 shows the scores on dev (internal and external) for FLC task. Observe that the features (i.e., polarity, POS and NER in row II) when introduced in LSTM-CRF improves F1. We run multi-grained LSTM-CRF without BERTSentEmb (i.e., row III) and with it (i.e., row IV), where the latter improves scores on dev (internal), however not on dev (external). Finally, we perform multi-tasking with another auxiliary task of PFD. Given the scores on dev (internal and external) using different configurations (rows I-V), it is difficult to infer the optimal configuration. Thus, we choose the two best configurations (II and IV) on dev (internal) set and build an ensemble+ of predictions (discussed in section SECREF6), leading to a boost in recall and thus an improved F1 on dev (external).\n\nFinally, we use the ensemble+ of (II and IV) from each of the folds 1-3, i.e., $|{\\mathcal {M}}|=6$ models to obtain predictions on test. Table TABREF9 (FLC) shows that our submission is ranked at 3rd position."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"164ad7041ea4682f9a65393079db6343b9fd3761"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features."
],
"extractive_spans": [],
"free_form_answer": "BERT",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"818d5c801dd3b1073ec068f0d85c7bb770843979"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Comparison of our system (MIC-CIS) with top-5 participants: Scores on Test set for SLC and FLC"
],
"extractive_spans": [],
"free_form_answer": "For SLC task : Ituorp, ProperGander and YMJA teams had better results.\nFor FLC task: newspeak and Antiganda teams had better results.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Comparison of our system (MIC-CIS) with top-5 participants: Scores on Test set for SLC and FLC"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"4cf500b9450b692c00423f3faa1dcc2c0fbb1073",
"5baa750a892e669cf9a5ef7987044dd358667356"
],
"answer": [
{
"evidence": [
"Shared Task: This work addresses the two tasks in propaganda detection BIBREF3 of different granularities: (1) Sentence-level Classification (SLC), a binary classification that predicts whether a sentence contains at least one propaganda technique, and (2) Fragment-level Classification (FLC), a token-level (multi-label) classification that identifies both the spans and the type of propaganda technique(s).",
"Contributions: (1) To address SLC, we design an ensemble of different classifiers based on Logistic Regression, CNN and BERT, and leverage transfer learning benefits using the pre-trained embeddings/models from FastText and BERT. We also employed different features such as linguistic (sentiment, readability, emotion, part-of-speech and named entity tags, etc.), layout, topics, etc. (2) To address FLC, we design a multi-task neural sequence tagger based on LSTM-CRF and linguistic features to jointly detect propagandistic fragments and its type. Moreover, we investigate performing FLC and SLC jointly in a multi-granularity network based on LSTM-CRF and BERT. (3) Our system (MIC-CIS) is ranked 3rd (out of 12 participants) and 4th (out of 25 participants) in FLC and SLC tasks, respectively.",
"Figure FIGREF2 (right) describes our system for FLC task, where we design sequence taggers BIBREF9, BIBREF10 in three modes: (1) LSTM-CRF BIBREF11 with word embeddings ($w\\_e$) and character embeddings $c\\_e$, token-level features ($t\\_f$) such as polarity, POS, NER, etc. (2) LSTM-CRF+Multi-grain that jointly performs FLC and SLC with FastTextWordEmb and BERTSentEmb, respectively. Here, we add binary sentence classification loss to sequence tagging weighted by a factor of $\\alpha $. (3) LSTM-CRF+Multi-task that performs propagandistic span/fragment detection (PFD) and FLC (fragment detection + 19-way classification).",
"FLOAT SELECTED: Figure 1: (Left): System description for SLC, including features, transfer learning using pre-trained word embeddings from FastText and BERT and classifiers: LogisticRegression, CNN and BERT fine-tuning. (Right): System description for FLC, including multi-tasking LSTM-CRF architecture consisting of Propaganda Fragment Detection (PFD) and FLC layers. Observe, a binary classification component at the last hidden layer in the recurrent architecture that jointly performs PFD, FLC and SLC tasks (i.e., multi-grained propaganda detection). Here, P: Propaganda, NP: Non-propaganda, B/I/O: Begin, Intermediate and Other tags of BIO tagging scheme."
],
"extractive_spans": [],
"free_form_answer": "An output layer for each task",
"highlighted_evidence": [
"This work addresses the two tasks in propaganda detection BIBREF3 of different granularities: (1) Sentence-level Classification (SLC), a binary classification that predicts whether a sentence contains at least one propaganda technique, and (2) Fragment-level Classification (FLC), a token-level (multi-label) classification that identifies both the spans and the type of propaganda technique(s).",
"To address FLC, we design a multi-task neural sequence tagger based on LSTM-CRF and linguistic features to jointly detect propagandistic fragments and its type. Moreover, we investigate performing FLC and SLC jointly in a multi-granularity network based on LSTM-CRF and BERT.",
"Figure FIGREF2 (right) describes our system for FLC task, where we design sequence taggers BIBREF9, BIBREF10 in three modes: (1) LSTM-CRF BIBREF11 with word embeddings ($w\\_e$) and character embeddings $c\\_e$, token-level features ($t\\_f$) such as polarity, POS, NER, etc. (2) LSTM-CRF+Multi-grain that jointly performs FLC and SLC with FastTextWordEmb and BERTSentEmb, respectively. Here, we add binary sentence classification loss to sequence tagging weighted by a factor of $\\alpha $. (3) LSTM-CRF+Multi-task that performs propagandistic span/fragment detection (PFD) and FLC (fragment detection + 19-way classification).",
"FLOAT SELECTED: Figure 1: (Left): System description for SLC, including features, transfer learning using pre-trained word embeddings from FastText and BERT and classifiers: LogisticRegression, CNN and BERT fine-tuning. (Right): System description for FLC, including multi-tasking LSTM-CRF architecture consisting of Propaganda Fragment Detection (PFD) and FLC layers. Observe, a binary classification component at the last hidden layer in the recurrent architecture that jointly performs PFD, FLC and SLC tasks (i.e., multi-grained propaganda detection). Here, P: Propaganda, NP: Non-propaganda, B/I/O: Begin, Intermediate and Other tags of BIO tagging scheme."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Contributions: (1) To address SLC, we design an ensemble of different classifiers based on Logistic Regression, CNN and BERT, and leverage transfer learning benefits using the pre-trained embeddings/models from FastText and BERT. We also employed different features such as linguistic (sentiment, readability, emotion, part-of-speech and named entity tags, etc.), layout, topics, etc. (2) To address FLC, we design a multi-task neural sequence tagger based on LSTM-CRF and linguistic features to jointly detect propagandistic fragments and its type. Moreover, we investigate performing FLC and SLC jointly in a multi-granularity network based on LSTM-CRF and BERT. (3) Our system (MIC-CIS) is ranked 3rd (out of 12 participants) and 4th (out of 25 participants) in FLC and SLC tasks, respectively."
],
"extractive_spans": [],
"free_form_answer": "Multi-tasking is addressed by neural sequence tagger based on LSTM-CRF and linguistic features, while multi-granularity is addressed by ensemble of LSTM-CRF and BERT.",
"highlighted_evidence": [
"To address FLC, we design a multi-task neural sequence tagger based on LSTM-CRF and linguistic features to jointly detect propagandistic fragments and its type. Moreover, we investigate performing FLC and SLC jointly in a multi-granularity network based on LSTM-CRF and BERT."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is best performing model among author's submissions, what performance it had?",
"What extracted features were most influencial on performance?",
"Did ensemble schemes help in boosting peformance, by how much?",
"Which basic neural architecture perform best by itself?",
"What participating systems had better results than ones authors submitted?",
"What is specific to multi-granularity and multi-tasking neural arhiteture design?"
],
"question_id": [
"c9305e5794b65b33399c22ac8e4e024f6b757a30",
"56b7319be68197727baa7d498fa38af0a8440fe4",
"2268c9044e868ba0a16e92d2063ada87f68b5d03",
"6b7354d7d715bad83183296ce2f3ddf2357cb449",
"e949b28f6d1f20e18e82742e04d68158415dc61e",
"a1ac2a152710335519c9a907eec60d9f468b19db"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Features used in SLC and FLC tasks",
"Figure 1: (Left): System description for SLC, including features, transfer learning using pre-trained word embeddings from FastText and BERT and classifiers: LogisticRegression, CNN and BERT fine-tuning. (Right): System description for FLC, including multi-tasking LSTM-CRF architecture consisting of Propaganda Fragment Detection (PFD) and FLC layers. Observe, a binary classification component at the last hidden layer in the recurrent architecture that jointly performs PFD, FLC and SLC tasks (i.e., multi-grained propaganda detection). Here, P: Propaganda, NP: Non-propaganda, B/I/O: Begin, Intermediate and Other tags of BIO tagging scheme.",
"Table 2: Comparison of our system (MIC-CIS) with top-5 participants: Scores on Test set for SLC and FLC",
"Table 4: FLC: Scores on Dev (internal) of Fold1 and Dev (external) with different models, features and ensembles. PFD: Propaganda Fragment Detection.",
"Table 3: SLC: Scores on Dev (internal) of Fold1 and Dev (external) using different classifiers and features."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"3-Table2-1.png",
"4-Table4-1.png",
"4-Table3-1.png"
]
} | [
"What is best performing model among author's submissions, what performance it had?",
"What extracted features were most influencial on performance?",
"Did ensemble schemes help in boosting peformance, by how much?",
"Which basic neural architecture perform best by itself?",
"What participating systems had better results than ones authors submitted?",
"What is specific to multi-granularity and multi-tasking neural arhiteture design?"
] | [
[
"1909.06162-3-Table2-1.png"
],
[
"1909.06162-4-Table3-1.png",
"1909.06162-Experiments and Evaluation ::: Results: Sentence-Level Propaganda-0",
"1909.06162-Introduction-3",
"1909.06162-Introduction-4"
],
[
"1909.06162-Experiments and Evaluation ::: Results: Fragment-Level Propaganda-0",
"1909.06162-Experiments and Evaluation ::: Results: Sentence-Level Propaganda-1",
"1909.06162-Experiments and Evaluation ::: Results: Fragment-Level Propaganda-1",
"1909.06162-Experiments and Evaluation ::: Results: Sentence-Level Propaganda-2",
"1909.06162-4-Table3-1.png",
"1909.06162-Experiments and Evaluation ::: Results: Sentence-Level Propaganda-0"
],
[
"1909.06162-4-Table3-1.png"
],
[
"1909.06162-3-Table2-1.png"
],
[
"1909.06162-System Description ::: Fragment-level Propaganda Detection-0",
"1909.06162-3-Figure1-1.png",
"1909.06162-Introduction-3",
"1909.06162-Introduction-4"
]
] | [
"For SLC task, the \"ltuorp\" team has the best performing model (0.6323/0.6028/0.6649 for F1/P/R respectively) and for FLC task the \"newspeak\" team has the best performing model (0.2488/0.2863/0.2201 for F1/P/R respectively).",
"Linguistic",
"They increased F1 Score by 0.029 in Sentence Level Classification, and by 0.044 in Fragment-Level classification",
"BERT",
"For SLC task : Ituorp, ProperGander and YMJA teams had better results.\nFor FLC task: newspeak and Antiganda teams had better results.",
"Multi-tasking is addressed by neural sequence tagger based on LSTM-CRF and linguistic features, while multi-granularity is addressed by ensemble of LSTM-CRF and BERT."
] | 218 |
2002.07306 | From English To Foreign Languages: Transferring Pre-trained Language Models | Pre-trained models have demonstrated their effectiveness in many downstream natural language processing (NLP) tasks. The availability of multilingual pre-trained models enables zero-shot transfer of NLP tasks from high resource languages to low resource ones. However, recent research in improving pre-trained models focuses heavily on English. While it is possible to train the latest neural architectures for other languages from scratch, it is undesirable due to the required amount of compute. In this work, we tackle the problem of transferring an existing pre-trained model from English to other languages under a limited computational budget. With a single GPU, our approach can obtain a foreign BERT base model within a day and a foreign BERT large within two days. Furthermore, evaluating our models on six languages, we demonstrate that our models are better than multilingual BERT on two zero-shot tasks: natural language inference and dependency parsing. | {
"paragraphs": [
[
"Pre-trained models BIBREF0, BIBREF1 have received much of attention recently thanks to their impressive results in many down stream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-short cross-lingual transfer. Zero-shot cross-lingual transfer has shown promising results for rapidly building applications for low resource languages. BIBREF2 show the potential of multilingual-BERT BIBREF0 in zero-shot transfer for a large number of languages from different language families on five NLP tasks, namely, natural language inference, document classification, named entity recognition, part-of-speech tagging, and dependency parsing.",
"Although multilingual models are an important ingredient for enhancing language technology in many languages, recent research on improving pre-trained models puts much emphasis on English BIBREF3, BIBREF4, BIBREF5. The current state of affairs makes it difficult to translate advancements in pre-training from English to non-English languages. To our best knowledge, there are only three available multilingual pre-trained models to date: (1) the multilingual-BERT (mBERT) that supports 104 languages, (2) cross-lingual language model BIBREF6 that supports 100 languages, and (3) Language Agnostic SEntence Representations BIBREF7 that supports 93 languages. Among the three models, LASER is based on neural machine translation approach and strictly requires parallel data to train.",
"Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target language specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both English and target model to obtain the bilingual LM. We apply our approach to autoencoding language models with masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are:",
"We propose a fast adaptation method for obtaining a bilingual BERT$_{\\textsc {base}}$ of English and a target language within a day using one Tesla V100 16GB GPU.",
"We evaluate our bilingual LMs for six languages on two zero-shot cross-lingual transfer tasks, namely natural language inference BIBREF9 and universal dependency parsing. We show that our models offer competitive performance or even better that mBERT.",
"We illustrate that our bilingual LMs can serve as an excellent feature extractor in supervised dependency parsing task."
],
[
"We first provide some background of pre-trained language models. Let $_e$ be English word-embeddings and $\\Psi ()$ be the Transformer BIBREF10 encoder with parameters $$. Let $_{w_i}$ denote the embedding of word $w_i$ (i.e., $_{w_i} = _e[w_1]$). We omit positional embeddings and bias for clarity. A pre-trained LM typically performs the following computations: (i) transform a sequence of input tokens to contextualized representations $[_{w_1},\\dots ,_{w_n}] = \\Psi (_{w_1}, \\dots , _{w_n}; )$, and (ii) predict an output word $y_i$ at $i^{\\text{th}}$ position $p(y_i | _{w_i}) \\propto \\exp (_{w_i}^\\top _{y_i})$.",
"Autoencoding LM BIBREF0 corrupts some input tokens $w_i$ by replacing them with a special token [MASK]. It then predicts the original tokens $y_i = w_i$ from the corrupted tokens. Autoregressive LM BIBREF3 predicts the next token ($y_i = w_{i+1}$) given all the previous tokens. The recently proposed XLNet model BIBREF5 is an autoregressive LM that factorizes output with all possible permutations, which shows empirical performance improvement over GPT-2 due to the ability to capture bidirectional context. Here we assume that the encoder performs necessary masking with respect to each training objective.",
"Given an English pre-trained LM, we wish to learn a bilingual LM for English and a given target language $f$ under a limited computational resource budget. To quickly build a bilingual LM, we directly adapt the English pre-traind model to the target model. Our approach consists of three steps. First, we initialize target language word-embeddings $_f$ in the English embedding space such that embeddings of a target word and its English equivalents are close together (§SECREF8). Next, we create a target LM from the target embeddings and the English encoder $\\Psi ()$. We then fine-tune target embeddings while keeping $\\Psi ()$ fixed (§SECREF14). Finally, we construct a bilingual LM of $_e$, $_f$, and $\\Psi ()$ and fine-tune all the parameters (§SECREF15). Figure FIGREF7 illustrates the last two steps in our approach."
],
[
"Our approach to learn the initial foreign word embeddings $_f \\in ^{|V_f| \\times d}$ is based on the idea of mapping the trained English word embeddings $_e \\in ^{|V_e| \\times d}$ to $_f$ such that if a foreign word and an English word are similar in meaning then their embeddings are similar. Borrowing the idea of universal lexical sharing from BIBREF11, we represent each foreign word embedding $_f[i] \\in ^d$ as a linear combination of English word embeddings $_e[j] \\in ^d$",
"where $_i\\in ^{|V_e|}$ is a sparse vector and $\\sum _j^{|V_e|} \\alpha _{ij} = 1$.",
"In this step of initializing foreign embeddings, having a good estimation of $$ could speed of the convergence when tuning the foreign model and enable zero-shot transfer (§SECREF5). In the following, we discuss how to estimate $_i\\;\\forall i\\in \\lbrace 1,2, \\dots , |V_f|\\rbrace $ under two scenarios: (i) we have parallel data of English-foreign, and (ii) we only rely on English and foreign monolingual data."
],
[
"Given an English-foreign parallel corpus, we can estimate word translation probability $p(e\\,|\\,f)$ for any (English-foreign) pair $(e, f)$ using popular word-alignment BIBREF12 toolkits such as fast-align BIBREF13. We then assign:",
"Since $_i$ is estimated from word alignment, it is a sparse vector."
],
[
"For low resource languages, parallel data may not be available. In this case, we rely only on monolingual data (e.g., Wikipedias). We estimate word translation probabilities from word embeddings of the two languages. Word vectors of these languages can be learned using fastText BIBREF14 and then are aligned into a shared space with English BIBREF15, BIBREF16. Unlike learning contextualized representations, learning word vectors is fast and computationally cheap. Given the aligned vectors $\\bar{}_f$ of foreign and $\\bar{}_e$ of English, we calculate the word translation matrix $\\in ^{|V_f|\\times |V_e|}$ as",
"Here, we use $\\mathrm {sparsemax}$ BIBREF17 instead of softmax. Sparsemax is a sparse version of softmax and it puts zero probabilities on most of the word in the English vocabulary except few English words that are similar to a given foreign word. This property is desirable in our approach since it leads to a better initialization of the foreign embeddings."
],
[
"After initializing foreign word-embeddings, we replace English word-embeddings in the English pre-trained LM with foreign word-embeddings to obtain the foreign LM. We then fine-tune only foreign word-embeddings on monolingual data. The training objective is the same as the training objective of the English pre-trained LM (i.e., masked LM for BERT). Since the trained encoder $\\Psi ()$ is good at capturing association, the purpose of this step is to further optimize target embeddings such that the target LM can utilized the trained encoder for association task. For example, if the words Albert Camus presented in a French input sequence, the self-attention in the encoder more likely attends to words absurde and existentialisme once their embeddings are tuned."
],
[
"We create a bilingual LM by plugging foreign language specific parameters to the pre-trained English LM (Figure FIGREF7). The new model has two separate embedding layers and output layers, one for English and one for foreign language. The encoder layer in between is shared. We then fine-tune this model using English and foreign monolingual data. Here, we keep tuning the model on English to ensure that it does not forget what it has learned in English and that we can use the resulting model for zero-shot transfer (§SECREF3). In this step, the encoder parameters are also updated so that in can learn syntactic aspects (i.e., word order, morphological agreement) of the target languages."
],
[
"We build our bilingual LMs, named RAMEN, starting from BERT$_{\\textsc {base}}$, BERT$_{\\textsc {large}}$, RoBERTa$_{\\textsc {base}}$, and RoBERTa$_{\\textsc {large}}$ pre-trained models. Using BERT$_{\\textsc {base}}$ allows us to compare the results with mBERT model. Using BERT$_{\\textsc {large}}$ and RoBERTa allows us to investigate whether the performance of the target LM correlates with the performance of the source LM. We evaluate our models on two cross-lingual zero-shot tasks: (1) Cross-lingual Natural Language Inference (XNLI) and (2) dependency parsing."
],
[
"We evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi). These languages belong to four different language families. French, Russian, and Hindi are Indo-European languages, similar to English. Arabic, Chinese, and Vietnamese belong to Afro-Asiatic, Sino-Tibetan, and Austro-Asiatic family respectively. The choice of the six languages also reflects different training conditions depending on the amount of monolingual data. French and Russian, and Arabic can be regarded as high resource languages whereas Hindi has far less data and can be considered as low resource.",
"For experiments that use parallel data to initialize foreign specific parameters, we use the same datasets in the work of BIBREF6. Specifically, we use United Nations Parallel Corpus BIBREF18 for en-ru, en-ar, en-zh, and en-fr. We collect en-hi parallel data from IIT Bombay corpus BIBREF19 and en-vi data from OpenSubtitles 2018. For experiments that use only monolingual data to initialize foreign parameters, instead of training word-vectors from the scratch, we use the pre-trained word vectors from fastText BIBREF14 to estimate word translation probabilities (Eq. DISPLAY_FORM13). We align these vectors into a common space using orthogonal Procrustes BIBREF20, BIBREF15, BIBREF16. We only use identical words between the two languages as the supervised signal. We use WikiExtractor to extract extract raw sentences from Wikipedias as monolingual data for fine-tuning target embeddings and bilingual LMs (§SECREF15). We do not lowercase or remove accents in our data preprocessing pipeline.",
"We tokenize English using the provided tokenizer from pre-trained models. For target languages, we use fastBPE to learn 30,000 BPE codes and 50,000 codes when transferring from BERT and RoBERTa respectively. We truncate the BPE vocabulary of foreign languages to match the size of the English vocabulary in the source models. Precisely, the size of foreign vocabulary is set to 32,000 when transferring from BERT and 50,000 when transferring from RoBERTa.",
"We use XNLI dataset BIBREF9 for classification task and Universal Dependencies v2.4 BIBREF21 for parsing task. Since a language might have more than one treebank in Universal Dependencies, we use the following treebanks: en_ewt (English), fr_gsd (French), ru_syntagrus (Russian) ar_padt (Arabic), vi_vtb (Vietnamese), hi_hdtb (Hindi), and zh_gsd (Chinese)."
],
[
"BIBREF22 show that sharing subwords between languages improves alignments between embedding spaces. BIBREF2 observe a strong correlation between the percentage of overlapping subwords and mBERT's performances for cross-lingual zero-shot transfer. However, in our current approach, subwords between source and target are not shared. A subword that is in both English and foreign vocabulary has two different embeddings."
],
[
"Since pre-trained models operate on subword level, we need to estimate subword translation probabilities. Therefore, we subsample 2M sentence pairs from each parallel corpus and tokenize the data into subwords before running fast-align BIBREF13.",
"Estimating subword translation probabilities from aligned word vectors requires an additional processing step since the provided vectors from fastText are not at subword level. We use the following approximation to obtain subword vectors: the vector $_s$ of subword $s$ is the weighted average of all the aligned word vectors $_{w_i}$ that have $s$ as an subword",
"where $p(w_j)$ is the unigram probability of word $w_j$ and $n_s = \\sum _{w_j:\\, s\\in w_j} p(w_j)$. We take the top 50,000 words in each set of aligned word vectors to compute subword vectors.",
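"A sketch (hypothetical helper, mirroring the approximation above) of computing subword vectors as unigram-weighted averages of the aligned vectors of the words containing each subword.\n\n```python\nfrom collections import defaultdict\n\ndef subword_vectors(word_vecs, word_probs, subwords_of):\n    # word_vecs: dict word -> vector (e.g. a numpy array); word_probs: dict word -> unigram probability p(w)\n    # subwords_of: function mapping a word to the list of its subwords\n    sums, norms = defaultdict(float), defaultdict(float)\n    for w, vec in word_vecs.items():\n        for s in subwords_of(w):\n            sums[s] = sums[s] + word_probs[w] * vec\n            norms[s] += word_probs[w]\n    return {s: sums[s] / norms[s] for s in sums}\n```",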
"In both cases, not all the words in the foreign vocabulary can be initialized from the English word-embeddings. Those words are initialized randomly from a Gaussian $\\mathcal {N}(0, {1}{d^2})$."
],
[
"In all the experiments, we tune RAMEN$_{\\textsc {base}}$ for 175,000 updates and RAMEN$_{\\textsc {large}}$ for 275,000 updates where the first 25,000 updates are for language specific parameters. The sequence length is set to 256. The mini-batch size are 64 and 24 when tuning language specific parameters using RAMEN$_{\\textsc {base}}$ and RAMEN$_{\\textsc {large}}$ respectively. For tuning bilingual LMs, we use a mini-batch size of 64 for RAMEN$_{\\textsc {base}}$ and 24 for RAMEN$_{\\textsc {large}}$ where half of the batch are English sequences and the other half are foreign sequences. This strategy of balancing mini-batch has been used in multilingual neural machine translation BIBREF23, BIBREF24.",
"We optimize RAMEN$_{\\textsc {base}}$ using Lookahead optimizer BIBREF25 wrapped around Adam with the learning rate of $10^{-4}$, the number of fast weight updates $k=5$, and interpolation parameter $\\alpha =0.5$. We choose Lookahead optimizer because it has been shown to be robust to the initial parameters of the based optimizer (Adam). For Adam optimizer, we linearly increase the learning rate from $10^{-7}$ to $10^{-4}$ in the first 4000 updates and then follow an inverse square root decay. All RAMEN$_{\\textsc {large}}$ models are optimized with Adam due to memory limit.",
"When fine-tuning RAMEN on XNLI and UD, we use a mini-batch size of 32, Adam's learning rate of $10^{-5}$. The number of epochs are set to 4 and 50 for XNLI and UD tasks respectively. All experiments are carried out on a single Tesla V100 16GB GPU. Each RAMEN$_{\\textsc {base}}$ model is trained within a day and each RAMEN$_{\\textsc {large}}$ is trained within two days."
],
[
"In this section, we present the results of out models for two zero-shot cross lingual transfer tasks: XNLI and dependency parsing."
],
[
"Table TABREF32 shows the XNLI test accuracy. For reference, we also include the scores from the previous work, notably the state-of-the-art system XLM BIBREF6. Before discussing the results, we spell out that the fairest comparison in this experiment is the comparison between mBERT and RAMEN$_{\\textsc {base}}$+BERT trained with monolingual only.",
"We first discuss the transfer results from BERT. Initialized from fastText vectors, RAMEN$_{\\textsc {base}}$ slightly outperforms mBERT by 1.9 points on average and widen the gap of 3.3 points on Arabic. RAMEN$_{\\textsc {base}}$ gains extra 0.8 points on average when initialized from parallel data. With triple number of parameters, RAMEN$_{\\textsc {large}}$ offers an additional boost in term of accuracy and initialization with parallel data consistently improves the performance. It has been shown that BERT$_{\\textsc {large}}$ significantly outperforms BERT$_{\\textsc {base}}$ on 11 English NLP tasks BIBREF0, the strength of BERT$_{\\textsc {large}}$ also shows up when adapted to foreign languages.",
"Transferring from RoBERTa leads to better zero-shot accuracies. With the same initializing condition, RAMEN$_{\\textsc {base}}$+RoBERTa outperforms RAMEN$_{\\textsc {base}}$+BERT on average by 2.9 and 2.3 points when initializing from monolingual and parallel data respectively. This result show that with similar number of parameters, our approach benefits from a better English pre-trained model. When transferring from RoBERTa$_{\\textsc {large}}$, we obtain state-of-the-art results for five languages.",
"Currently, RAMEN only uses parallel data to initialize foreign embeddings. RAMEN can also exploit parallel data through translation objective proposed in XLM. We believe that by utilizing parallel data during the fine-tuning of RAMEN would bring additional benefits for zero-shot tasks. We leave this exploration to future work. In summary, starting from BERT$_{\\textsc {base}}$, our approach obtains competitive bilingual LMs with mBERT for zero-shot XNLI. Our approach shows the accuracy gains when adapting from a better pre-trained model."
],
[
"We build on top of RAMEN a graph-based dependency parser BIBREF27. For the purpose of evaluating the contextual representations learned by our model, we do not use part-of-speech tags. Contextualized representations are directly fed into Deep-Biaffine layers to predict arc and label scores. Table TABREF34 presents the Labeled Attachment Scores (LAS) for zero-shot dependency parsing.",
"We first look at the fairest comparison between mBERT and monolingually initialized RAMEN$_{\\textsc {base}}$+BERT. The latter outperforms the former on five languages except Arabic. We observe the largest gain of +5.2 LAS for French. Chinese enjoys +3.1 LAS from our approach. With similar architecture (12 or 24 layers) and initialization (using monolingual or parallel data), RAMEN+RoBERTa performs better than RAMEN+BERT for most of the languages. Arabic and Hindi benefit the most from bigger models. For the other four languages, RAMEN$_{\\textsc {large}}$ renders a modest improvement over RAMEN$_{\\textsc {base}}$."
],
[
"Initializing foreign embeddings is the backbone of our approach. A good initialization leads to better zero-shot transfer results and enables fast adaptation. To verify the importance of a good initialization, we train a RAMEN$_{\\textsc {base}}$+RoBERTa with foreign word-embeddings are initialized randomly from $\\mathcal {N}(0, {1}{d^2})$. For a fair comparison, we use the same hyper-parameters in §SECREF27. Table TABREF36 shows the results of XNLI and UD parsing of random initialization. In comparison to the initialization using aligned fastText vectors, random initialization decreases the zero-shot performance of RAMEN$_{\\textsc {base}}$ by 15.9% for XNLI and 27.8 points for UD parsing on average. We also see that zero-shot parsing of SOV languages (Arabic and Hindi) suffers random initialization."
],
[
"All the RAMEN models are built from English and tuned on English for zero-shot cross-lingual tasks. It is reasonable to expect RAMENs do well in those tasks as we have shown in our experiments. But are they also a good feature extractor for supervised tasks? We offer a partial answer to this question by evaluating our model for supervised dependency parsing on UD datasets.",
"We used train/dev/test splits provided in UD to train and evaluate our RAMEN-based parser. Table TABREF38 summarizes the results (LAS) of our supervised parser. For a fair comparison, we choose mBERT as the baseline and all the RAMEN models are initialized from aligned fastText vectors. With the same architecture of 12 Transformer layers, RAMEN$_{\\textsc {base}}$+BERT performs competitive to mBERT and outshines mBERT by +1.2 points for Vietnamese. The best LAS results are obtained by RAMEN$_{\\textsc {large}}$+RoBERTa with 24 Transformer layers. Overall, our results indicate the potential of using contextual representations from RAMEN for supervised tasks."
],
[
"We evaluate the performance of RAMEN+RoBERTa$_{\\textsc {base}}$ (initialized from monolingual data) at each training steps: initialization of word embeddings (0K update), fine-tuning target embeddings (25K), and fine-tuning the model on both English and target language (at each 25K updates). The results are presented in Figure FIGREF40.",
"Without fine-tuning, the average accuracy of XLNI is 39.7% for a three-ways classification task, and the average LAS score is 3.6 for dependency parsing. We see the biggest leap in the performance after 50K updates. While semantic similarity task profits significantly at 25K updates of the target embeddings, syntactic task benefits with further fine-tuning the encoder. This is expected since the target languages might exhibit different syntactic structures than English and fine-tuning encoder helps to capture language specific structures. We observe a substantial gain of 19-30 LAS for all languages except French after 50K updates.",
"Language similarities have more impact on transferring syntax than semantics. Without tuning the English encoder, French enjoys 50.3 LAS for being closely related to English, whereas Arabic and Hindi, SOV languages, modestly reach 4.2 and 6.4 points using the SVO encoder. Although Chinese has SVO order, it is often seen as head-final while English is strong head-initial. Perhaps, this explains the poor performance for Chinese."
],
[
"While we have successfully adapted autoencoding pre-trained LMs from English to other languages, the question whether our approach can also be applied for autoregressive LM such as XLNet still remains. We leave the investigation to future work."
],
[
"In this work, we have presented a simple and effective approach for rapidly building a bilingual LM under a limited computational budget. Using BERT as the starting point, we demonstrate that our approach produces better than mBERT on two cross-lingual zero-shot sentence classification and dependency parsing. We find that the performance of our bilingual LM, RAMEN, correlates with the performance of the original pre-trained English models. We also find that RAMEN is also a powerful feature extractor in supervised dependency parsing. Finally, we hope that our work sparks of interest in developing fast and effective methods for transferring pre-trained English models to other languages."
]
],
"section_name": [
"Introduction",
"Bilingual Pre-trained LMs",
"Bilingual Pre-trained LMs ::: Initializing Target Embeddings",
"Bilingual Pre-trained LMs ::: Initializing Target Embeddings ::: Learning from Parallel Corpus",
"Bilingual Pre-trained LMs ::: Initializing Target Embeddings ::: Learning from Monolingual Corpus",
"Bilingual Pre-trained LMs ::: Fine-tuning Target Embeddings",
"Bilingual Pre-trained LMs ::: Fine-tuning Bilingual LM",
"Zero-shot Experiments",
"Zero-shot Experiments ::: Data",
"Zero-shot Experiments ::: Data ::: Remark on BPE",
"Zero-shot Experiments ::: Estimating translation probabilities",
"Zero-shot Experiments ::: Hyper-parameters",
"Results",
"Results ::: Cross-lingual Natural Language Inference",
"Results ::: Universal Dependency Parsing",
"Analysis ::: Impact of initialization",
"Analysis ::: Are contextual representations from RAMEN also good for supervised parsing?",
"Analysis ::: How does linguistic knowledge transfer happen through each training stages?",
"Limitations",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"c24fabe7f9f9f9591b6c57782f6904bf1c930aea"
],
"answer": [
{
"evidence": [
"Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target language specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both English and target model to obtain the bilingual LM. We apply our approach to autoencoding language models with masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are:"
],
"extractive_spans": [],
"free_form_answer": "No data. Pretrained model is used.",
"highlighted_evidence": [
"In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"a3d3604691f4c6bb83b436a2db8b4ad2124a3ac2",
"e8eac861373de62f92bb8ccec47bd9efce254fa9"
],
"answer": [
{
"evidence": [
"We evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi). These languages belong to four different language families. French, Russian, and Hindi are Indo-European languages, similar to English. Arabic, Chinese, and Vietnamese belong to Afro-Asiatic, Sino-Tibetan, and Austro-Asiatic family respectively. The choice of the six languages also reflects different training conditions depending on the amount of monolingual data. French and Russian, and Arabic can be regarded as high resource languages whereas Hindi has far less data and can be considered as low resource."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"French and Russian, and Arabic can be regarded as high resource languages whereas Hindi has far less data and can be considered as low resource."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi). These languages belong to four different language families. French, Russian, and Hindi are Indo-European languages, similar to English. Arabic, Chinese, and Vietnamese belong to Afro-Asiatic, Sino-Tibetan, and Austro-Asiatic family respectively. The choice of the six languages also reflects different training conditions depending on the amount of monolingual data. French and Russian, and Arabic can be regarded as high resource languages whereas Hindi has far less data and can be considered as low resource."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The choice of the six languages also reflects different training conditions depending on the amount of monolingual data. French and Russian, and Arabic can be regarded as high resource languages whereas Hindi has far less data and can be considered as low resource."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"76f87dcf104d0a434e35c3bcb4049af36e695de4",
"c2282e51fc879f41e4bcff256977020b6ab58588"
],
"answer": [
{
"evidence": [
"We evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi). These languages belong to four different language families. French, Russian, and Hindi are Indo-European languages, similar to English. Arabic, Chinese, and Vietnamese belong to Afro-Asiatic, Sino-Tibetan, and Austro-Asiatic family respectively. The choice of the six languages also reflects different training conditions depending on the amount of monolingual data. French and Russian, and Arabic can be regarded as high resource languages whereas Hindi has far less data and can be considered as low resource."
],
"extractive_spans": [
"French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi). These languages belong to four different language families. French, Russian, and Hindi are Indo-European languages, similar to English. Arabic, Chinese, and Vietnamese belong to Afro-Asiatic, Sino-Tibetan, and Austro-Asiatic family respectively. The choice of the six languages also reflects different training conditions depending on the amount of monolingual data. French and Russian, and Arabic can be regarded as high resource languages whereas Hindi has far less data and can be considered as low resource."
],
"extractive_spans": [
"French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"bda9755353a74eb09a5f8a1b36b90a6ab7d0a038"
],
"answer": [
{
"evidence": [
"Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target language specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both English and target model to obtain the bilingual LM. We apply our approach to autoencoding language models with masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are:"
],
"extractive_spans": [],
"free_form_answer": "Build a bilingual language model, learn the target language specific parameters starting from a pretrained English LM , fine-tune both English and target model to obtain the bilingual LM.",
"highlighted_evidence": [
" In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target language specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both English and target model to obtain the bilingual LM. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"0549cb82a8a4b771ba62bc1ebeb14724465fb787",
"e4ce290c8b29d78fdcdf537489c4ebabf62dd04e"
],
"answer": [
{
"evidence": [
"Since pre-trained models operate on subword level, we need to estimate subword translation probabilities. Therefore, we subsample 2M sentence pairs from each parallel corpus and tokenize the data into subwords before running fast-align BIBREF13.",
"We build on top of RAMEN a graph-based dependency parser BIBREF27. For the purpose of evaluating the contextual representations learned by our model, we do not use part-of-speech tags. Contextualized representations are directly fed into Deep-Biaffine layers to predict arc and label scores. Table TABREF34 presents the Labeled Attachment Scores (LAS) for zero-shot dependency parsing."
],
"extractive_spans": [
"translation probabilities",
"Labeled Attachment Scores (LAS)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since pre-trained models operate on subword level, we need to estimate subword translation probabilities.",
"For the purpose of evaluating the contextual representations learned by our model, we do not use part-of-speech tags. Contextualized representations are directly fed into Deep-Biaffine layers to predict arc and label scores. Table TABREF34 presents the Labeled Attachment Scores (LAS) for zero-shot dependency parsing."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF32 shows the XNLI test accuracy. For reference, we also include the scores from the previous work, notably the state-of-the-art system XLM BIBREF6. Before discussing the results, we spell out that the fairest comparison in this experiment is the comparison between mBERT and RAMEN$_{\\textsc {base}}$+BERT trained with monolingual only.",
"We build on top of RAMEN a graph-based dependency parser BIBREF27. For the purpose of evaluating the contextual representations learned by our model, we do not use part-of-speech tags. Contextualized representations are directly fed into Deep-Biaffine layers to predict arc and label scores. Table TABREF34 presents the Labeled Attachment Scores (LAS) for zero-shot dependency parsing."
],
"extractive_spans": [
"accuracy",
"Labeled Attachment Scores (LAS)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF32 shows the XNLI test accuracy.",
"Table TABREF34 presents the Labeled Attachment Scores (LAS) for zero-shot dependency parsing."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"b894224c621c89845243b3d62d543ceaad9bfeb1"
],
"answer": [
{
"evidence": [
"For experiments that use parallel data to initialize foreign specific parameters, we use the same datasets in the work of BIBREF6. Specifically, we use United Nations Parallel Corpus BIBREF18 for en-ru, en-ar, en-zh, and en-fr. We collect en-hi parallel data from IIT Bombay corpus BIBREF19 and en-vi data from OpenSubtitles 2018. For experiments that use only monolingual data to initialize foreign parameters, instead of training word-vectors from the scratch, we use the pre-trained word vectors from fastText BIBREF14 to estimate word translation probabilities (Eq. DISPLAY_FORM13). We align these vectors into a common space using orthogonal Procrustes BIBREF20, BIBREF15, BIBREF16. We only use identical words between the two languages as the supervised signal. We use WikiExtractor to extract extract raw sentences from Wikipedias as monolingual data for fine-tuning target embeddings and bilingual LMs (§SECREF15). We do not lowercase or remove accents in our data preprocessing pipeline."
],
"extractive_spans": [
"United Nations Parallel Corpus",
"IIT Bombay corpus",
"OpenSubtitles 2018"
],
"free_form_answer": "",
"highlighted_evidence": [
"For experiments that use parallel data to initialize foreign specific parameters, we use the same datasets in the work of BIBREF6. Specifically, we use United Nations Parallel Corpus BIBREF18 for en-ru, en-ar, en-zh, and en-fr. We collect en-hi parallel data from IIT Bombay corpus BIBREF19 and en-vi data from OpenSubtitles 2018."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How much training data from the non-English language is used by the system?",
"Is the system tested on low-resource languages?",
"What languages are the model transferred to?",
"How is the model transferred to other languages?",
"What metrics are used for evaluation?",
"What datasets are used for evaluation?"
],
"question_id": [
"f9aa055bf73185ba939dfb03454384810eb17ad1",
"d571e0b0f402a3d36fb30d70cdcd2911df883bc7",
"ce2b921e4442a21555d65d8ce4ef7e3bde931dfc",
"2275b0e195cd9cb25f50c5c570da97a4cce5dca8",
"37f8c034a14c7b4d0ab2e0ed1b827cc0eaa71ac6",
"d01c51155e4719bf587d114bcd403b273c77246f"
],
"question_writer": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Pictorial illustration of our approach. Left: fine-tune language specific parameters while keeping transformer encoder fixed. Right: jointly train a bilingual LM and update all the parameters.",
"Table 1: Zero-shot classification results on XNLI. ì indicates parallel data is used. RAMEN only uses parallel data for initialization. The best results are marked in bold.",
"Table 2: LAS scores for zero-shot dependency parsing. ì indicates parallel data is used for initialization. Punctuation are removed during the evaluation. The best results are marked in bold.",
"Table 3: Comparison between random initialization (rnd) of language specific parameters and initialization using aligned fastText vectors (vec).",
"Table 4: Evaluation in supervised UD parsing. The scores are LAS.",
"Figure 2: Accuracy and LAS evaluated at each checkpoints.",
"Table 5: UAS/LAS scores for zero-shot dependency parsing. ì indicates parallel data is used for initialization. Punctuation are removed during the evaluation.",
"Table 6: Comparison between b-BERT trained from scratch for 1,000,000 updates and RAMEN trained for 175,000 updates."
],
"file": [
"2-Figure1-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Figure2-1.png",
"11-Table5-1.png",
"11-Table6-1.png"
]
} | [
"How much training data from the non-English language is used by the system?",
"How is the model transferred to other languages?"
] | [
[
"2002.07306-Introduction-2"
],
[
"2002.07306-Introduction-2"
]
] | [
"No data. Pretrained model is used.",
"Build a bilingual language model, learn the target language specific parameters starting from a pretrained English LM , fine-tune both English and target model to obtain the bilingual LM."
] | 221 |
1810.05241 | Generating Diverse Numbers of Diverse Keyphrases | Existing keyphrase generation studies suffer from the problems of generating duplicate phrases and deficient evaluation based on a fixed number of predicted phrases. We propose a recurrent generative model that generates multiple keyphrases sequentially from a text, with specific modules that promote generation diversity. We further propose two new metrics that consider a variable number of phrases. With both existing and proposed evaluation setups, our model demonstrates superior performance to baselines on three types of keyphrase generation datasets, including two newly introduced in this work: StackExchange and TextWorld ACG. In contrast to previous keyphrase generation approaches, our model generates sets of diverse keyphrases of a variable number. | {
"paragraphs": [
[
"Keyphrase generation is the task of automatically predicting keyphrases given a source text. Desired keyphrases are often multi-word units that summarize the high-level meaning and highlight certain important topics or information of the source text. Consequently, models that can successfully perform this task should be capable of not only distilling high-level information from a document, but also locating specific, important snippets therein.",
"To make the problem even more challenging, a keyphrase may or may not be a substring of the source text (i.e., it may be present or absent). Moreover, a given source text is usually associated with a set of multiple keyphrases. Thus, keyphrase generation is an instance of the set generation problem, where both the size of the set and the size (i.e., the number of tokens in a phrase) of each element can vary depending on the source.",
"Similar to summarization, keyphrase generation is often formulated as a sequence-to-sequence (Seq2Seq) generation task in most prior studies BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Conditioned on a source text, Seq2Seq models generate phrases individually or as a longer sequence jointed by delimiting tokens. Since standard Seq2Seq models generate only one sequence at a time, thus to generate multiple phrases, a common approach is to over-generate using beam search with a large beam width. Models are then evaluated by taking a fixed number of top predicted phrases (typically 5 or 10) and comparing them against the ground truth keyphrases.",
"Though this approach has achieved good empirical results, we argue that it suffers from two major limitations. Firstly, models that use beam search to generate multiple keyphrases generally lack the ability to determine the dynamic number of keyphrases needed for different source texts. Meanwhile, the parallelism in beam search also fails to model the inter-relation among the generated phrases, which can often result in diminished diversity in the output. Although certain existing models take output diversity into consideration during training BIBREF1 , BIBREF2 , the effort is significantly undermined during decoding due to the reliance on over-generation and phrase ranking with beam search.",
"Secondly, the current evaluation setup is rather problematic, since existing studies attempt to match a fixed number of outputs against a variable number of ground truth keyphrases. Empirically, the number of keyphrases can vary drastically for different source texts, depending on a plethora of factors including the length or genre of the text, the granularity of keyphrase annotation, etc. For the several commonly used keyphrase generation datasets, for example, the average number of keyphrases per data point can range from 5.3 to 15.7, with variances sometimes as large as 64.6 (Table TABREF1 ). Therefore, using an arbitrary, fixed number INLINEFORM0 to evaluate entire datasets is not appropriate. In fact, under this evaluation setup, the F1 score for the oracle model on the KP20k dataset is 0.858 for INLINEFORM1 and 0.626 for INLINEFORM2 , which apparently poses serious normalization issues as evaluation metrics.",
"To overcome these problems, we propose novel decoding strategies and evaluation metrics for the keyphrase generation task. The main contributions of this work are as follows:"
],
[
"Traditional keyphrase extraction has been studied extensively in past decades. In most existing literature, keyphrase extraction has been formulated as a two-step process. First, lexical features such as part-of-speech tags are used to determine a list of phrase candidates by heuristic methods BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . Second, a ranking algorithm is adopted to rank the candidate list and the top ranked candidates are selected as keyphrases. A wide variety of methods were applied for ranking, such as bagged decision trees BIBREF8 , BIBREF9 , Multi-Layer Perceptron, Support Vector Machine BIBREF9 and PageRank BIBREF10 , BIBREF11 , BIBREF12 . Recently, BIBREF13 , BIBREF14 , BIBREF15 used sequence labeling models to extract keyphrases from text. Similarly, BIBREF16 used Pointer Networks to point to the start and end positions of keyphrases in a source text.",
"The main drawback of keyphrase extraction is that sometimes keyphrases are absent from the source text, thus an extractive model will fail predicting those keyphrases. BIBREF0 first proposed the CopyRNN, a neural generative model that both generates words from vocabulary and points to words from the source text. Recently, based on the CopyRNN architecture, BIBREF1 proposed the CorrRNN, which takes states and attention vectors from previous steps into account in both encoder and decoder to reduce duplication and improve coverage. BIBREF2 proposed semi-supervised methods by leveraging both labeled and unlabeled data for training. BIBREF3 , BIBREF2 proposed to use structure information (e.g., title of source text) to improve keyphrase generation performance. Note that none of the above works are able to generate variable number of phrases, which is one of our contributions."
],
[
"Sequence to Sequence (Seq2Seq) learning was first introduced by BIBREF17 ; together with the soft attention mechanism of BIBREF18 , it has been widely used in natural language generation tasks. BIBREF19 , BIBREF20 used a mixture of generation and pointing to overcome the problem of large vocabulary size. BIBREF21 , BIBREF22 applied Seq2Seq models on summary generation tasks, while BIBREF23 , BIBREF24 generated questions conditioned on documents and answers from machine comprehension datasets. Seq2Seq was also applied on neural sentence simplification BIBREF25 and paraphrase generation tasks BIBREF26 .",
"Given a source text consisting of INLINEFORM0 words INLINEFORM1 , the encoder converts their corresponding embeddings INLINEFORM2 into a set of INLINEFORM3 real-valued vectors INLINEFORM4 with a bidirectional GRU BIBREF27 : DISPLAYFORM0 ",
"Dropout BIBREF28 is applied to both INLINEFORM0 and INLINEFORM1 for regularization.",
"The decoder is a uni-directional GRU, which generates a new state INLINEFORM0 at each time-step INLINEFORM1 from the word embedding INLINEFORM2 and the recurrent state INLINEFORM3 : DISPLAYFORM0 ",
"The initial state INLINEFORM0 is derived from the final encoder state INLINEFORM1 by applying a single-layer feed-forward neural net (FNN): INLINEFORM2 . Dropout is applied to both the embeddings INLINEFORM3 and the GRU states INLINEFORM4 .",
"When generating token INLINEFORM0 , in order to better incorporate information from the source text, an attention mechanism BIBREF18 is employed to infer the importance INLINEFORM1 of each source word INLINEFORM2 given the current decoder state INLINEFORM3 . This importance is measured by an energy function with a 2-layer FNN: DISPLAYFORM0 ",
"The output over all decoding steps INLINEFORM0 thus define a distribution over the source sequence: DISPLAYFORM0 ",
"These attention scores are then used as weights for a refined representation of the source encodings, which is then concatenated to the decoder state INLINEFORM0 to derive a generative distribution INLINEFORM1 : DISPLAYFORM0 ",
"where the output size of INLINEFORM0 equals to the target vocabulary size. Subscript INLINEFORM1 indicates the abstractive nature of INLINEFORM2 since it is a distribution over a prescribed vocabulary.",
"We employ the pointer softmax BIBREF19 mechanism to switch between generating a token INLINEFORM0 (from a vocabulary) and pointing (to a token in the source text). Specifically, the pointer softmax module computes a scalar switch INLINEFORM1 at each generation time-step and uses it to interpolate the abstractive distribution INLINEFORM2 over the vocabulary (see Equation EQREF16 ) and the extractive distribution INLINEFORM3 over the source text tokens: DISPLAYFORM0 ",
"where INLINEFORM0 is conditioned on both the attention-weighted source representation INLINEFORM1 and the decoder state INLINEFORM2 : DISPLAYFORM0 "
],
[
"Given a piece of source text, our objective is to generate a variable number of multi-word phrases. To this end, we opt for the sequence-to-sequence framework (Seq2Seq) as the basis of our model, combined with attention and pointer softmax mechanisms in the decoder.",
"Since each data example contains one source text sequence and multiple target phrase sequences (dubbed One2Many, and each sequence can be of multi-word), two paradigms can be adopted for training Seq2Seq models. The first one BIBREF0 is to divide each One2Many data example into multiple One2One examples, and the resulting models (e.g. CopyRNN) can generate one phrase at once and must rely on beam search technique to produce more unique phrases.",
"To enable models to generate multiple phrases and control the number to output, we propose the second training paradigm One2Seq, in which we concatenate multiple phrases into a single sequence with a delimiter INLINEFORM0 SEP INLINEFORM1 , and this concatenated sequence is then used as the target for sequence generation during training. An overview of the model's structure is shown in Figure FIGREF8 ."
],
[
"In the following subsections, we use INLINEFORM0 to denote input text tokens, INLINEFORM1 to denote token embeddings, INLINEFORM2 to denote hidden states, and INLINEFORM3 to denote output text tokens. Superscripts denote time-steps in a sequence, and subscripts INLINEFORM4 and INLINEFORM5 indicate whether a variable resides in the encoder or the decoder of the model, respectively. The absence of a superscript indicates multiplicity in the time dimension. INLINEFORM6 refers to a linear transformation and INLINEFORM7 refers to it followed by a non-linear activation function INLINEFORM8 . Angled brackets, INLINEFORM9 , denote concatenation."
],
[
"There are usually multiple keyphrases for a given source text because each keyphrase represents certain aspects of the text. Therefore keyphrase diversity is desired for the keyphrase generation. Most previous keyphrase generation models generate multiple phrases by over-generation, which is highly prone to generate similar phrases due to the nature of beam search. Given our objective to generate variable numbers of keyphrases, we need to adopt new strategies for achieving better diversity in the output.",
"Recall that we represent variable numbers of keyphrases as delimiter-separated sequences. One particular issue we observed during error analysis is that the model tends to produce identical tokens following the delimiter token. For example, suppose a target sequence contains INLINEFORM0 delimiter tokens at time-steps INLINEFORM1 . During training, the model is rewarded for generating the same delimiter token at these time-steps, which presumably introduces much homogeneity in the corresponding decoder states INLINEFORM2 . When these states are subsequently used as inputs at the time-steps immediately following the delimiter, the decoder naturally produces highly similar distributions over the following tokens, resulting in identical tokens being decoded. To alleviate this problem, we propose two plug-in components for the sequential generation model.",
"We propose a mechanism called semantic coverage that focuses on the semantic representations of generated phrases. Specifically, we introduce another uni-directional recurrent model INLINEFORM0 (dubbed target encoder) which encodes decoder-generated tokens INLINEFORM1 , where INLINEFORM2 , into hidden states INLINEFORM3 . This state is then taken as an extra input to the decoder GRU, modifying Equation EQREF12 to: DISPLAYFORM0 ",
"If the target encoder were to be updated with the training signal from generation (i.e., backpropagating error from the decoder GRU to the target encoder), the resulting decoder is essentially a 2-layer GRU with residual connections. Instead, inspired by previous representation learning works BIBREF29 , BIBREF30 , BIBREF31 , we train the target encoder in an self-supervised fashion (Figure FIGREF8 ). That is, we extract target encoder's final hidden state vector INLINEFORM0 , where INLINEFORM1 is the length of target sequence, and use it as a general representation of the target phrases. We train by maximizing the mutual information between these phrase representations and the final state of the source encoder INLINEFORM2 as follows. For each phrase representation vector INLINEFORM3 , we take the enocdings INLINEFORM4 of INLINEFORM5 different source texts, where INLINEFORM6 is the encoder representation for the current source text, and the remaining INLINEFORM7 are negative samples (sampled at random) from the training data. The target encoder is trained to minimize the classification loss: DISPLAYFORM0 ",
"where INLINEFORM0 is bi-linear transformation.",
"The motivation here is to constrain the overall representation of generated keyphrase to be semantically close to the overall meaning of the source text. With such representations as input to the decoder, the semantic coverage mechanism can potentially help to provide useful keyphrase information and guide generation.",
"We also propose orthogonal regularization, which explicitly encourages the delimiter-generating decoder states to be different from each other. This is inspired by BIBREF32 , who use orthogonal regularization to encourage representations across domains to be as distinct as possible. Specifically, we stack the decoder hidden states corresponding to delimiters together to form matrix INLINEFORM0 and use the following equation as the orthogonal regularization loss: DISPLAYFORM0 ",
"where INLINEFORM0 is the matrix transpose of INLINEFORM1 , INLINEFORM2 is the identity matrix of rank INLINEFORM3 , INLINEFORM4 indicates element wise multiplication, INLINEFORM5 indicates INLINEFORM6 norm of each element in a matrix INLINEFORM7 . This loss function prefers orthogonality among the hidden states INLINEFORM8 and thus improves diversity in the tokens following the delimiters.",
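"A sketch (our own PyTorch rendering; aggregating the masked Gram matrix with a Frobenius norm is an assumption) of the orthogonal regularization term: penalize pairwise inner products between the decoder states at delimiter positions.\n\n```python\nimport torch\n\ndef orthogonal_regularization(H):\n    # H: (num_delimiters, hidden_dim), one row per delimiter-position decoder state\n    gram = H @ H.t()                                    # pairwise inner products\n    mask = 1.0 - torch.eye(H.size(0), device=H.device)  # zero out the diagonal\n    return torch.norm(gram * mask)\n```",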
"We adopt the widely used negative log-likelihood loss in our sequence generation model, denoted as INLINEFORM0 . The overall loss we use in our model is DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are hyper-parameters."
],
[
"According to different task requirements, various decoding methods can be applied to generate the target sequence INLINEFORM0 . Prior studies BIBREF0 , BIBREF7 focus more on generating excessive number of phrases by leveraging beam search to proliferate the output phrases. In contrast, models trained under One2Seq paradigm are capable of determining the proper number of phrases to output. In light of previous research in psychology BIBREF33 , BIBREF34 , we name these two decoding/search strategies as Exhaustive Decoding and Self-terminating Decoding, respectively, due to their resemblance to the way humans behave in serial memory tasks. Simply speaking, the major difference lies in whether a model is capable of controlling the number of phrases to output. We describe the detailed decoding strategies used in this study as follows:",
"As traditional keyphrase tasks evaluate models with a fixed number of top-ranked predictions (say F-score @5 and @10), existing keyphrase generation studies have to over-generate phrases by means of beam search (commonly with a large beam size, e.g., 150 and 200 in BIBREF3 , BIBREF0 , respectively), a heuristic search algorithm that returns INLINEFORM0 approximate optimal sequences. For the One2One setting, each returned sequence is a unique phrase itself. But for One2Seq, each produced sequence contains several phrases and additional processes BIBREF2 are needed to obtain the final unique (ordered) phrase list.",
"It is worth noting that the time complexity of beam search is INLINEFORM0 , where INLINEFORM1 is the beam width, and INLINEFORM2 is the maximum length of generated sequences. Therefore the exhaustive decoding is generally very computationally expensive, especially for One2Seq setting where INLINEFORM3 is much larger than in One2One. It is also wasteful as we observe that less than 5% of phrases generated by One2Seq models are unique.",
"An innate characteristic of keyphrase tasks is that the number of keyphrases varies depending on the document and dataset genre, therefore dynamically outputting a variable number of phrases is a desirable property for keyphrase generation models. Since our proposed model is trained to generate a variable number of phrases as a single sequence joined by delimiters, we can obtain multiple phrases by simply decoding a single sequence for each given source text. The resulting model thus implicitly performs the additional task of dynamically estimating the proper size of the target phrase set: once the model believes that an adequate number of phrases have been generated, it outputs a special token INLINEFORM0 EOS INLINEFORM1 to terminate the decoding process.",
"One notable attribute of the self-terminating decoding strategy is that, by generating a set of phrases in a single sequence, the model conditions its current generation on all previously generated phrases. Compared to the exhaustive strategy (i.e., phrases being generated independently by beam search in parallel), our model can model the dependency among its output in a more explicit fashion. Additionally, since multiple phrases are decoded as a single sequence, decoding can be performed more efficiently than exhaustive decoding by conducting greedy search or beam search on only the top-scored sequence."
],
[
"Formally, given a source text, suppose that a model predicts a list of unique keyphrases INLINEFORM0 ordered by the quality of the predictions INLINEFORM1 , and that the ground truth keyphrases for the given source text is the oracle set INLINEFORM2 . When only the top INLINEFORM3 predictions INLINEFORM4 are used for evaluation, precision, recall, and F INLINEFORM5 score are consequently conditioned on INLINEFORM6 and defined as: DISPLAYFORM0 ",
"As discussed in Section SECREF1 , the number of generated keyphrases used for evaluation can have a critical impact on the quality of the resulting evaluation metrics. Here we compare three choices of INLINEFORM0 and the implications on keyphrase evaluation for each choice:",
"A simple remedy is to set INLINEFORM0 as a variable number which is specific to each data example. Here we define two new metrics:",
"By simply extending the constant number INLINEFORM0 to different variables accordingly, both F INLINEFORM1 @ INLINEFORM2 and F INLINEFORM3 @ INLINEFORM4 are capable of reflecting the nature of variable number of phrases for each document, and a model can achieve the maximum INLINEFORM5 score of INLINEFORM6 if and only if it predicts the exact same phrases as the ground truth. Another merit of F INLINEFORM7 @ INLINEFORM8 is that it is independent from model outputs, therefore we can use it to compare existing models."
],
[
"In this section, we report our experiment results on multiple datasets and compare with existing models. We use INLINEFORM0 to refer to the delimiter-concatenated sequence-to-sequences model described in Section SECREF3 ; INLINEFORM1 refers to the model augmented with orthogonal regularization and semantic coverage mechanism.",
"To construct target sequences for training INLINEFORM0 and INLINEFORM1 , ground truth keyphrases are sorted by their order of first occurrence in the source text. Keyphrases that do not appear in the source text are appended to the end. This order may guide the attention mechanism to attend to source positions in a smoother way. Implementation details can be found in Appendix SECREF9 .",
"We include four non-neural extractive models and CopyRNN BIBREF0 as baselines. We use CopyRNN to denote the model reported by BIBREF0 , CopyRNN* to denote our implementation of CopyRNN based on their open sourced code. To draw fair comparison with existing study, we use the same model hyperparameter setting as used in BIBREF0 and use exhaustive decoding strategy for most experiments. KEA BIBREF4 and Maui BIBREF8 are trained on a subset of 50,000 documents from either KP20k (Table TABREF35 ) or StackEx (Table TABREF37 ) instead of all documents due to implementation limits (without fine-tuning on target dataset).",
"In Section SECREF42 , we apply the self-terminating decoding strategy. Since no existing model supports such decoding strategy, we only report results from our proposed models. They can be used for comparison in future studies."
],
[
"Our first dataset consists of a collection of scientific publication datasets, namely KP20k, Inspec, Krapivin, NUS, and SemEval, that have been widely used in existing literature BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . KP20k, for example, was introduced by BIBREF0 and comprises more than half a million scientific publications. For each article, the abstract and title are used as the source text while the author keywords are used as target. The other four datasets contain much fewer articles, and thus used to test transferability of our model (without fine-tuning).",
"We report our model's performance on the present-keyphrase portion of the KP20k dataset in Table TABREF35 . To compare with previous works, we provide compute INLINEFORM0 and INLINEFORM1 scores. The new proposed F INLINEFORM2 @ INLINEFORM3 metric indicates consistent ranking with INLINEFORM4 for most cases. Due to its target number sensitivity, we find that its value is closer to INLINEFORM5 for KP20k and Krapivin where average target keyphrases is less and closer to INLINEFORM6 for the other three datasets.",
"From the result we can see that the neural-based models outperform non-neural models by large margins. Our implemented CopyRNN achieves better or comparable performance against the original model, and on NUS and SemEval the advantage is more salient.",
"As for the proposed models, both INLINEFORM0 and INLINEFORM1 yield comparable results to CopyRNN, indicating that One2Seq paradigm can work well as an alternative option for the keyphrase generation. INLINEFORM2 outperforms INLINEFORM3 on all metrics, suggesting the semantic coverage and orthogonal regularization help the model to generate higher quality keyphrases and achieve better generalizability. To our surprise, on the metric F INLINEFORM4 @10 for KP20k and Krapivin (average number of keyphrases is only 5), where high-recall models like CopyRNN are more favored, INLINEFORM5 is still able to outperform One2One baselines, indicating that the proposed mechanisms for diverse generation are effective."
],
[
"Inspired by the StackLite tag recommendation task on Kaggle, we build a new benchmark based on the public StackExchange data. We use questions with titles as source, and user-assigned tags as target keyphrases.",
"Since oftentimes the questions on StackExchange contain less information than in scientific publications, there are fewer keyphrases per data point in StackEx. Furthermore, StackExchange uses a tag recommendation system that suggests topic-relevant tags to users while submitting questions; therefore, we are more likely to see general terminology such as Linux and Java. This characteristic challenges models with respect to their ability to distill major topics of a question rather than selecting specific snippets from the text.",
"We report our models' performance on StackEx in Table TABREF37 . Results show INLINEFORM0 performs the best; on the absent-keyphrase generation tasks, it outperforms INLINEFORM1 by a large margin."
],
[
"One key advantage of our proposed model is the capability of predicting the number of keyphrases conditioned on the given source text. We thus conduct a set of experiments on KP20k and StackEx present keyphrase generation tasks, as shown in Table TABREF39 , to study such behavior. We adopt the self-terminating decoding strategy (Section SECREF28 ), and use both F INLINEFORM0 @ INLINEFORM1 and F INLINEFORM2 @ INLINEFORM3 (Section SECREF4 ) to evaluate.",
"In these experiments, we use beam search as in most Natural Language Generation (NLG) tasks, i.e., only use the top ranked prediction sequence as output. We compare the results with greedy search. Since no existing model is capable of generating variable number of keyphrases, in this subsection we only report performance on such setting from INLINEFORM0 and INLINEFORM1 .",
"From Table TABREF39 we observe that in the variable number generation setting, greedy search outperforms beam search consistently. This may because beam search tends to generate short and similar sequences. We can also see the resulting F INLINEFORM0 @ INLINEFORM1 scores are generally lower than results reported in previous subsections, this suggests an over-generation decoding strategy may still benefit from achieving higher recall."
],
[
"We conduct an ablation experiment to study the effects of orthogonal regularization and semantic coverage mechanism on INLINEFORM0 . As shown in Table TABREF44 , semantic coverage provides significant boost to INLINEFORM1 's performance on all datasets. Orthogonal regularization hurts performance when is solely applied to INLINEFORM2 model. Interestingly, when both components are enabled ( INLINEFORM3 ), the model outperforms INLINEFORM4 by a noticeable margin on all datasets, this suggests the two components help keyphrase generation in a synergetic way. One future direction is to apply orthogonal regularization directly on target encoder, since the regularizer can potentially diversify target representations at phrase level, which may further encourage diverse keyphrase generation in decoder."
],
[
"To verify our assumption that target encoding and orthogonal regularization help to boost the diversity of generated sequences, we use two metrics, one quantitative and one qualitative, to measure diversity of generation.",
"First, we simply calculate the average unique predictions produced by both INLINEFORM0 and INLINEFORM1 in experiments shown in Section SECREF36 . The resulting numbers are 20.38 and 89.70 for INLINEFORM2 and INLINEFORM3 respectively. Second, from the model running on the KP20k validation set, we randomly sample 2000 decoder hidden states at INLINEFORM4 steps following a delimiter ( INLINEFORM5 ) and apply an unsupervised clustering method (t-SNE BIBREF35 ) on them. From the Figure FIGREF46 we can see that hidden states sampled from INLINEFORM6 are easier to cluster while hidden states sampled from INLINEFORM7 yield one mass of vectors with no obvious distinct clusters. Results on both metrics suggest target encoding and orthogonal regularization indeed help diversifying generation of our model."
],
[
"To illustrate the difference of predictions between our proposed models, we show an example chosen from the KP20k validation set in Appendix SECREF10 . In this example there are 29 ground truth phrases. Neither of the models is able to generate all of the keyphrases, but it is obvious that the predictions from INLINEFORM0 all start with “test”, while predictions from INLINEFORM1 are diverse. This to some extent verifies our assumption that without the target encoder and orthogonal regularization, decoder states following delimiters are less diverse."
],
[
"We propose a recurrent generative model that sequentially generates multiple keyphrases, with two extra modules that enhance generation diversity. We propose new metrics to evaluate keyphrase generation. Our model shows competitive performance on a set of keyphrase generation datasets, including one introduced in this work. In future work, we plan to investigate how target phrase order affects the generation behavior, and further explore set generation in an order invariant fashion."
],
[
"Generating absent keyphrases on scientific publication datasets is a rather challenging problem. Existing studies often achieve seemingly good performance by measuring recall on tens and sometimes hundreds of keyphrases produced by exhaustive decoding with a large beam size — thus completely ignoring precision.",
"We report the models' R@10/50 scores on the absent portion of five scientific paper datasets in Table TABREF48 to be in line with previous studies.",
"The absent keyphrase prediction highly prefers recall-oriented models, therefore CopyRNN with beam size of 200 is innately proper for this task setting. Howerer, from the results we observe that with the help of exhaustive decoding and diverse mechanisms, INLINEFORM0 is able to perform comparably to CopyRNN model, and it generally works better for top predictions. Even though the trend of models' performance somewhat matches what we observe on the present data, we argue that it is hard to compare different models' performance on such scale. We argue that StackEx is better testbeds for absent keyphrase generation."
],
[
"Implemntation details of our proposed models are as follows. In all experiments, the word embeddings are initialized with 100-dimensional random matrices. The number of hidden units in both the encoder and decoder GRU are 150. The number of hidden units in target encoder GRU is 150. The size of vocabulary is 50,000.",
"The numbers of hidden units in MLPs described in Section SECREF3 are as follows. During negative sampling, we randomly sample 16 samples from the same batch, thus target encoding loss in Equation EQREF23 is a 17-way classification loss. In INLINEFORM0 , we set both the INLINEFORM1 and INLINEFORM2 in Equation EQREF27 to be 0.3. In all experiments, we use a dropout rate of 0.1.",
"We use Adam BIBREF36 as the step rule for optimization. The learning rate is INLINEFORM0 . The model is implemented using PyTorch BIBREF38 and OpenNMT BIBREF37 .",
"For exhaustive decoding, we use a beam size of 50 and a maximum sequence length of 40.",
"Following BIBREF0 , lowercase and stemming are performed on both the ground truth and generated keyphrases during evaluation.",
"We leave out 2,000 data examples as validation set for both KP20k and StackEx and use them to identify optimal checkpoints for testing. And all the scores reported in this paper are from checkpoints with best performances (F INLINEFORM0 @ INLINEFORM1 ) on validation set."
],
[
"See Table TABREF49 ."
]
],
"section_name": [
"Introduction",
"Keyphrase Extraction and Generation",
"Sequence to Sequence Generation",
"Model Architecture",
"Notations",
"Mechanisms for Diverse Generation",
"Decoding Strategies",
"Evaluating Keyphrase Generation",
"Datasets and Experiments",
"Experiments on Scientific Publications",
"Experiments on The StackEx Dataset",
"Generating Variable Number Keyphrases",
"Ablation Study",
"Visualizing Diversified Generation",
"Qualitative Analysis",
"Conclusion and Future Work",
"Experiment Results on KP20k Absent Subset",
"Implementation Details",
"Example Output"
]
} | {
"answers": [
{
"annotation_id": [
"f31adf3b2c4ae9267ed96efbb62b4a8461b50304"
],
"answer": [
{
"evidence": [
"To verify our assumption that target encoding and orthogonal regularization help to boost the diversity of generated sequences, we use two metrics, one quantitative and one qualitative, to measure diversity of generation.",
"First, we simply calculate the average unique predictions produced by both INLINEFORM0 and INLINEFORM1 in experiments shown in Section SECREF36 . The resulting numbers are 20.38 and 89.70 for INLINEFORM2 and INLINEFORM3 respectively. Second, from the model running on the KP20k validation set, we randomly sample 2000 decoder hidden states at INLINEFORM4 steps following a delimiter ( INLINEFORM5 ) and apply an unsupervised clustering method (t-SNE BIBREF35 ) on them. From the Figure FIGREF46 we can see that hidden states sampled from INLINEFORM6 are easier to cluster while hidden states sampled from INLINEFORM7 yield one mass of vectors with no obvious distinct clusters. Results on both metrics suggest target encoding and orthogonal regularization indeed help diversifying generation of our model.",
"To illustrate the difference of predictions between our proposed models, we show an example chosen from the KP20k validation set in Appendix SECREF10 . In this example there are 29 ground truth phrases. Neither of the models is able to generate all of the keyphrases, but it is obvious that the predictions from INLINEFORM0 all start with “test”, while predictions from INLINEFORM1 are diverse. This to some extent verifies our assumption that without the target encoder and orthogonal regularization, decoder states following delimiters are less diverse."
],
"extractive_spans": [
"average unique predictions",
"illustrate the difference of predictions between our proposed models, we show an example chosen from the KP20k validation set"
],
"free_form_answer": "",
"highlighted_evidence": [
"To verify our assumption that target encoding and orthogonal regularization help to boost the diversity of generated sequences, we use two metrics, one quantitative and one qualitative, to measure diversity of generation.",
"First, we simply calculate the average unique predictions produced by both INLINEFORM0 and INLINEFORM1 in experiments shown in Section SECREF36 .",
" In this example there are 29 ground truth phrases. Neither of the models is able to generate all of the keyphrases, but it is obvious that the predictions from INLINEFORM0 all start with “test”, while predictions from INLINEFORM1 are diverse."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1d3d19fd314b5670a987b43a5dfd73ad65856cac",
"dabac265fba1579b3adaa384fad989b925fb797d"
],
"answer": [
{
"evidence": [
"Inspired by the StackLite tag recommendation task on Kaggle, we build a new benchmark based on the public StackExchange data. We use questions with titles as source, and user-assigned tags as target keyphrases.",
"Since oftentimes the questions on StackExchange contain less information than in scientific publications, there are fewer keyphrases per data point in StackEx. Furthermore, StackExchange uses a tag recommendation system that suggests topic-relevant tags to users while submitting questions; therefore, we are more likely to see general terminology such as Linux and Java. This characteristic challenges models with respect to their ability to distill major topics of a question rather than selecting specific snippets from the text."
],
"extractive_spans": [],
"free_form_answer": "they obtained computer science related topics by looking at titles and user-assigned tags",
"highlighted_evidence": [
"Inspired by the StackLite tag recommendation task on Kaggle, we build a new benchmark based on the public StackExchange data. We use questions with titles as source, and user-assigned tags as target keyphrases.\n\nSince oftentimes the questions on StackExchange contain less information than in scientific publications, there are fewer keyphrases per data point in StackEx. Furthermore, StackExchange uses a tag recommendation system that suggests topic-relevant tags to users while submitting questions; therefore, we are more likely to see general terminology such as Linux and Java. This characteristic challenges models with respect to their ability to distill major topics of a question rather than selecting specific snippets from the text."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1da3dfb5eb655811b3036fb0db419b8f653688e3"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1aa07d2820c11c6c57ee49d95f76483e197d9905",
"e3ffbd07c16e839cf11c0c81cf90adba64637d9f"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Statistics of datasets we use in this work. Avg# and Var# indicate the mean and variance of numbers of target phrases per data point, respectively."
],
"extractive_spans": [],
"free_form_answer": "around 332k questions",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Statistics of datasets we use in this work. Avg# and Var# indicate the mean and variance of numbers of target phrases per data point, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"09f5672eb8306b738e641906623010589f742047",
"4c107a00e48ad5d65054723eabc634e8fd526aad"
],
"answer": [
{
"evidence": [
"We report our model's performance on the present-keyphrase portion of the KP20k dataset in Table TABREF35 . To compare with previous works, we provide compute INLINEFORM0 and INLINEFORM1 scores. The new proposed F INLINEFORM2 @ INLINEFORM3 metric indicates consistent ranking with INLINEFORM4 for most cases. Due to its target number sensitivity, we find that its value is closer to INLINEFORM5 for KP20k and Krapivin where average target keyphrases is less and closer to INLINEFORM6 for the other three datasets.",
"FLOAT SELECTED: Table 2: Present keyphrase predicting performance on KP20K test set. Compared with CopyRNN (Meng et al., 2017), Multi-Task (Ye and Wang, 2018), and TG-Net (Chen et al., 2018b)."
],
"extractive_spans": [],
"free_form_answer": "CopyRNN (Meng et al., 2017), Multi-Task (Ye and Wang, 2018), and TG-Net (Chen et al., 2018b)",
"highlighted_evidence": [
"We report our model's performance on the present-keyphrase portion of the KP20k dataset in Table TABREF35 .",
"FLOAT SELECTED: Table 2: Present keyphrase predicting performance on KP20K test set. Compared with CopyRNN (Meng et al., 2017), Multi-Task (Ye and Wang, 2018), and TG-Net (Chen et al., 2018b)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We include four non-neural extractive models and CopyRNN BIBREF0 as baselines. We use CopyRNN to denote the model reported by BIBREF0 , CopyRNN* to denote our implementation of CopyRNN based on their open sourced code. To draw fair comparison with existing study, we use the same model hyperparameter setting as used in BIBREF0 and use exhaustive decoding strategy for most experiments. KEA BIBREF4 and Maui BIBREF8 are trained on a subset of 50,000 documents from either KP20k (Table TABREF35 ) or StackEx (Table TABREF37 ) instead of all documents due to implementation limits (without fine-tuning on target dataset)."
],
"extractive_spans": [
"CopyRNN BIBREF0",
"KEA BIBREF4 and Maui BIBREF8",
"CopyRNN*"
],
"free_form_answer": "",
"highlighted_evidence": [
"We include four non-neural extractive models and CopyRNN BIBREF0 as baselines. We use CopyRNN to denote the model reported by BIBREF0 , CopyRNN* to denote our implementation of CopyRNN based on their open sourced code. To draw fair comparison with existing study, we use the same model hyperparameter setting as used in BIBREF0 and use exhaustive decoding strategy for most experiments. KEA BIBREF4 and Maui BIBREF8 are trained on a subset of 50,000 documents from either KP20k (Table TABREF35 ) or StackEx (Table TABREF37 ) instead of all documents due to implementation limits (without fine-tuning on target dataset)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"78eb27957ed0b218e359a30b25cb20cbbb2719f4"
],
"answer": [
{
"evidence": [
"First, we simply calculate the average unique predictions produced by both INLINEFORM0 and INLINEFORM1 in experiments shown in Section SECREF36 . The resulting numbers are 20.38 and 89.70 for INLINEFORM2 and INLINEFORM3 respectively. Second, from the model running on the KP20k validation set, we randomly sample 2000 decoder hidden states at INLINEFORM4 steps following a delimiter ( INLINEFORM5 ) and apply an unsupervised clustering method (t-SNE BIBREF35 ) on them. From the Figure FIGREF46 we can see that hidden states sampled from INLINEFORM6 are easier to cluster while hidden states sampled from INLINEFORM7 yield one mass of vectors with no obvious distinct clusters. Results on both metrics suggest target encoding and orthogonal regularization indeed help diversifying generation of our model."
],
"extractive_spans": [
"average unique predictions",
"randomly sample 2000 decoder hidden states at INLINEFORM4 steps following a delimiter ( INLINEFORM5 ) and apply an unsupervised clustering method (t-SNE BIBREF35 )"
],
"free_form_answer": "",
"highlighted_evidence": [
"First, we simply calculate the average unique predictions produced by both INLINEFORM0 and INLINEFORM1 in experiments shown in Section SECREF36 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"How is keyphrase diversity measured?",
"How was the StackExchange dataset collected?",
"What does the TextWorld ACG dataset contain?",
"What is the size of the StackExchange dataset?",
"What were the baselines?",
"What two metrics are proposed?"
],
"question_id": [
"1e4dbfc556cf237accb8b370de2f164fa723687b",
"fff5c24dca92bc7d5435a2600e6764f039551787",
"b2ecfd5480a2a4be98730e2d646dfb84daedab17",
"a3efe43a72b76b8f5e5111b54393d00e6a5c97ab",
"f1e90a553a4185a4b0299bd179f4f156df798bce",
"19b7312cfdddb02c3d4eaa40301a67143a72a35a"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Overall structure of our proposed model. A represents last states of the bi-directional source encoder; B represents decoder states where target tokens are delimiters; C indicates target encoder states where input tokens are delimiters. During orthogonal regularization, all B states are used; during target encoder training, we maximize the mutual information between states A with each of the C states; red dash arrow indicates a detached path, i.e., do not back propagate through this path.",
"Table 1: Statistics of datasets we use in this work. Avg# and Var# indicate the mean and variance of numbers of target phrases per data point, respectively.",
"Table 2: Present keyphrase predicting performance on KP20K test set. Compared with CopyRNN (Meng et al., 2017), Multi-Task (Ye and Wang, 2018), and TG-Net (Chen et al., 2018b).",
"Table 3: Model performance on transfer learning datasets.",
"Table 4: Model performance on STACKEXCHANGE dataset.",
"Table 5: Model performance on TEXTWORLD ACG dataset.",
"Table 6: Example from KP20K validation set, predictions generated by catSeq and catSeqD models.",
"Figure 2: t-SNE results on decoder hidden states. Upper row: catSeq; lower row: catSeqD; column k shows hidden states sampled from tokens at k steps following a delimiter.",
"Table 7: Average number of generated keyphrase on KP20K and TEXTWORLD ACG.",
"Table 8: Example observations and admissible commands in TEXTWORLD ACG dataset."
],
"file": [
"3-Figure1-1.png",
"7-Table1-1.png",
"8-Table2-1.png",
"9-Table3-1.png",
"9-Table4-1.png",
"10-Table5-1.png",
"10-Table6-1.png",
"11-Figure2-1.png",
"14-Table7-1.png",
"15-Table8-1.png"
]
} | [
"How was the StackExchange dataset collected?",
"What is the size of the StackExchange dataset?",
"What were the baselines?"
] | [
[
"1810.05241-Experiments on The StackEx Dataset-1",
"1810.05241-Experiments on The StackEx Dataset-0"
],
[
"1810.05241-7-Table1-1.png"
],
[
"1810.05241-Datasets and Experiments-2",
"1810.05241-Experiments on Scientific Publications-1",
"1810.05241-8-Table2-1.png"
]
] | [
"they obtained computer science related topics by looking at titles and user-assigned tags",
"around 332k questions",
"CopyRNN (Meng et al., 2017), Multi-Task (Ye and Wang, 2018), and TG-Net (Chen et al., 2018b)"
] | 223 |
1909.01362 | Trouble on the Horizon: Forecasting the Derailment of Online Conversations as they Develop | Online discussions often derail into toxic exchanges between participants. Recent efforts mostly focused on detecting antisocial behavior after the fact, by analyzing single comments in isolation. To provide more timely notice to human moderators, a system needs to preemptively detect that a conversation is heading towards derailment before it actually turns toxic. This means modeling derailment as an emerging property of a conversation rather than as an isolated utterance-level event. ::: Forecasting emerging conversational properties, however, poses several inherent modeling challenges. First, since conversations are dynamic, a forecasting model needs to capture the flow of the discussion, rather than properties of individual comments. Second, real conversations have an unknown horizon: they can end or derail at any time; thus a practical forecasting model needs to assess the risk in an online fashion, as the conversation develops. In this work we introduce a conversational forecasting model that learns an unsupervised representation of conversational dynamics and exploits it to predict future derailment as the conversation develops. By applying this model to two new diverse datasets of online conversations with labels for antisocial events, we show that it outperforms state-of-the-art systems at forecasting derailment. | {
"paragraphs": [
[
"“Ché saetta previsa vien più lenta.”",
"",
"– Dante Alighieri, Divina Commedia, Paradiso",
"Antisocial behavior is a persistent problem plaguing online conversation platforms; it is both widespread BIBREF0 and potentially damaging to mental and emotional health BIBREF1, BIBREF2. The strain this phenomenon puts on community maintainers has sparked recent interest in computational approaches for assisting human moderators.",
"Prior work in this direction has largely focused on post-hoc identification of various kinds of antisocial behavior, including hate speech BIBREF3, BIBREF4, harassment BIBREF5, personal attacks BIBREF6, and general toxicity BIBREF7. The fact that these approaches only identify antisocial content after the fact limits their practicality as tools for assisting pre-emptive moderation in conversational domains.",
"Addressing this limitation requires forecasting the future derailment of a conversation based on early warning signs, giving the moderators time to potentially intervene before any harm is done (BIBREF8 BIBREF8, BIBREF9 BIBREF9, see BIBREF10 BIBREF10 for a discussion). Such a goal recognizes derailment as emerging from the development of the conversation, and belongs to the broader area of conversational forecasting, which includes future-prediction tasks such as predicting the eventual length of a conversation BIBREF11, whether a persuasion attempt will eventually succeed BIBREF12, BIBREF13, BIBREF14, whether team discussions will eventually lead to an increase in performance BIBREF15, or whether ongoing counseling conversations will eventually be perceived as helpful BIBREF16.",
"Approaching such conversational forecasting problems, however, requires overcoming several inherent modeling challenges. First, conversations are dynamic and their outcome might depend on how subsequent comments interact with each other. Consider the example in Figure FIGREF2: while no individual comment is outright offensive, a human reader can sense a tension emerging from their succession (e.g., dismissive answers to repeated questioning). Thus a forecasting model needs to capture not only the content of each individual comment, but also the relations between comments. Previous work has largely relied on hand-crafted features to capture such relations—e.g., similarity between comments BIBREF16, BIBREF12 or conversation structure BIBREF17, BIBREF18—, though neural attention architectures have also recently shown promise BIBREF19.",
"The second modeling challenge stems from the fact that conversations have an unknown horizon: they can be of varying lengths, and the to-be-forecasted event can occur at any time. So when is it a good time to make a forecast? Prior work has largely proposed two solutions, both resulting in important practical limitations. One solution is to assume (unrealistic) prior knowledge of when the to-be-forecasted event takes place and extract features up to that point BIBREF20, BIBREF8. Another compromising solution is to extract features from a fixed-length window, often at the start of the conversation BIBREF21, BIBREF15, BIBREF16, BIBREF9. Choosing a catch-all window-size is however impractical: short windows will miss information in comments they do not encompass (e.g., a window of only two comments would miss the chain of repeated questioning in comments 3 through 6 of Figure FIGREF2), while longer windows risk missing the to-be-forecasted event altogether if it occurs before the end of the window, which would prevent early detection.",
"In this work we introduce a model for forecasting conversational events that overcomes both these inherent challenges by processing comments, and their relations, as they happen (i.e., in an online fashion). Our main insight is that models with these properties already exist, albeit geared toward generation rather than prediction: recent work in context-aware dialog generation (or “chatbots”) has proposed sequential neural models that make effective use of the intra-conversational dynamics BIBREF22, BIBREF23, BIBREF24, while concomitantly being able to process the conversation as it develops (see BIBREF25 for a survey).",
"In order for these systems to perform well in the generative domain they need to be trained on massive amounts of (unlabeled) conversational data. The main difficulty in directly adapting these models to the supervised domain of conversational forecasting is the relative scarcity of labeled data: for most forecasting tasks, at most a few thousands labeled examples are available, insufficient for the notoriously data-hungry sequential neural models.",
"To overcome this difficulty, we propose to decouple the objective of learning a neural representation of conversational dynamics from the objective of predicting future events. The former can be pre-trained on large amounts of unsupervised data, similarly to how chatbots are trained. The latter can piggy-back on the resulting representation after fine-tuning it for classification using relatively small labeled data. While similar pre-train-then-fine-tune approaches have recently achieved state-of-the-art performance in a number of NLP tasks—including natural language inference, question answering, and commonsense reasoning (discussed in Section SECREF2)—to the best of our knowledge this is the first attempt at applying this paradigm to conversational forecasting.",
"To test the effectiveness of this new architecture in forecasting derailment of online conversations, we develop and distribute two new datasets. The first triples in size the highly curated `Conversations Gone Awry' dataset BIBREF9, where civil-starting Wikipedia Talk Page conversations are crowd-labeled according to whether they eventually lead to personal attacks; the second relies on in-the-wild moderation of the popular subreddit ChangeMyView, where the aim is to forecast whether a discussion will later be subject to moderator action due to “rude or hostile” behavior. In both datasets, our model outperforms existing fixed-window approaches, as well as simpler sequential baselines that cannot account for inter-comment relations. Furthermore, by virtue of its online processing of the conversation, our system can provide substantial prior notice of upcoming derailment, triggering on average 3 comments (or 3 hours) before an overtly toxic comment is posted.",
"To summarize, in this work we:",
"introduce the first model for forecasting conversational events that can capture the dynamics of a conversation as it develops;",
"build two diverse datasets (one entirely new, one extending prior work) for the task of forecasting derailment of online conversations;",
"compare the performance of our model against the current state-of-the-art, and evaluate its ability to provide early warning signs.",
"Our work is motivated by the goal of assisting human moderators of online communities by preemptively signaling at-risk conversations that might deserve their attention. However, we caution that any automated systems might encode or even amplify the biases existing in the training data BIBREF26, BIBREF27, BIBREF28, so a public-facing implementation would need to be exhaustively scrutinized for such biases BIBREF29.",
""
],
[
"",
"Antisocial behavior. Antisocial behavior online comes in many forms, including harassment BIBREF30, cyberbullying BIBREF31, and general aggression BIBREF32. Prior work has sought to understand different aspects of such behavior, including its effect on the communities where it happens BIBREF33, BIBREF34, the actors involved BIBREF35, BIBREF36, BIBREF37, BIBREF38 and connections to the outside world BIBREF39.",
"Post-hoc classification of conversations. There is a rich body of prior work on classifying the outcome of a conversation after it has concluded, or classifying conversational events after they happened. Many examples exist, but some more closely related to our present work include identifying the winner of a debate BIBREF40, BIBREF41, BIBREF42, identifying successful negotiations BIBREF21, BIBREF43, as well as detecting whether deception BIBREF44, BIBREF45, BIBREF46 or disagreement BIBREF47, BIBREF48, BIBREF49, BIBREF50, BIBREF51 has occurred.",
"Our goal is different because we wish to forecast conversational events before they happen and while the conversation is still ongoing (potentially allowing for interventions). Note that some post-hoc tasks can also be re-framed as forecasting tasks (assuming the existence of necessary labels); for instance, predicting whether an ongoing conversation will eventually spark disagreement BIBREF18, rather than detecting already-existing disagreement.",
"Conversational forecasting. As described in Section SECREF1, prior work on forecasting conversational outcomes and events has largely relied on hand-crafted features to capture aspects of conversational dynamics. Example feature sets include statistical measures based on similarity between utterances BIBREF16, sentiment imbalance BIBREF20, flow of ideas BIBREF20, increase in hostility BIBREF8, reply rate BIBREF11 and graph representations of conversations BIBREF52, BIBREF17. By contrast, we aim to automatically learn neural representations of conversational dynamics through pre-training.",
"Such hand-crafted features are typically extracted from fixed-length windows of the conversation, leaving unaddressed the problem of unknown horizon. While some work has trained multiple models for different window-lengths BIBREF8, BIBREF18, they consider these models to be independent and, as such, do not address the issue of aggregating them into a single forecast (i.e., deciding at what point to make a prediction). We implement a simple sliding windows solution as a baseline (Section SECREF5).",
"Pre-training for NLP. The use of pre-training for natural language tasks has been growing in popularity after recent breakthroughs demonstrating improved performance on a wide array of benchmark tasks BIBREF53, BIBREF54. Existing work has generally used a language modeling objective as the pre-training objective; examples include next-word prediction BIBREF55, sentence autoencoding, BIBREF56, and machine translation BIBREF57. BERT BIBREF58 introduces a variation on this in which the goal is to predict the next sentence in a document given the current sentence. Our pre-training objective is similar in spirit, but operates at a conversation level, rather than a document level. We hence view our objective as conversational modeling rather than (only) language modeling. Furthermore, while BERT's sentence prediction objective is framed as a multiple-choice task, our objective is framed as a generative task.",
""
],
[
"",
"We consider two datasets, representing related but slightly different forecasting tasks. The first dataset is an expanded version of the annotated Wikipedia conversations dataset from BIBREF9. This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use post-hoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future.",
"Wikipedia data. BIBREF9's `Conversations Gone Awry' dataset consists of 1,270 conversations that took place between Wikipedia editors on publicly accessible talk pages. The conversations are sourced from the WikiConv dataset BIBREF59 and labeled by crowdworkers as either containing a personal attack from within (i.e., hostile behavior by one user in the conversation directed towards another) or remaining civil throughout.",
"A series of controls are implemented to prevent models from picking up on trivial correlations. To prevent models from capturing topic-specific information (e.g., political conversations are more likely to derail), each attack-containing conversation is paired with a clean conversation from the same talk page, where the talk page serves as a proxy for topic. To force models to actually capture conversational dynamics rather than detecting already-existing toxicity, human annotations are used to ensure that all comments preceding a personal attack are civil.",
"To the ends of more effective model training, we elected to expand the `Conversations Gone Awry' dataset, using the original annotation procedure. Since we found that the original data skewed towards shorter conversations, we focused this crowdsourcing run on longer conversations: ones with 4 or more comments preceding the attack. Through this additional crowdsourcing, we expand the dataset to 4,188 conversations, which we are publicly releasing as part of the Cornell Conversational Analysis Toolkit (ConvoKit).",
"We perform an 80-20-20 train/dev/test split, ensuring that paired conversations end up in the same split in order to preserve the topic control. Finally, we randomly sample another 1 million conversations from WikiConv to use for the unsupervised pre-training of the generative component.",
"Reddit CMV data. The CMV dataset is constructed from conversations collected via the Reddit API. In contrast to the Wikipedia-based dataset, we explicitly avoid the use of post-hoc annotation. Instead, we use as our label whether a conversation eventually had a comment removed by a moderator for violation of Rule 2: “Don't be rude or hostile to other users”.",
"Though the lack of post-hoc annotation limits the degree to which we can impose controls on the data (e.g., some conversations may contain toxic comments not flagged by the moderators) we do reproduce as many of the Wikipedia data's controls as we can. Namely, we replicate the topic control pairing by choosing pairs of positive and negative examples that belong to the same top-level post, following BIBREF12; and enforce that the removed comment was made by a user who was previously involved in the conversation. This process results in 6,842 conversations, to which we again apply a pair-preserving 80-20-20 split. Finally, we gather over 600,000 conversations that do not include any removed comment, for unsupervised pre-training."
],
[
"We now describe our general model for forecasting future conversational events. Our model integrates two components: (a) a generative dialog model that learns to represent conversational dynamics in an unsupervised fashion; and (b) a supervised component that fine-tunes this representation to forecast future events. Figure FIGREF13 provides an overview of the proposed architecture, henceforth CRAFT (Conversational Recurrent Architecture for ForecasTing).",
"Terminology. For modeling purposes, we treat a conversation as a sequence of $N$ comments $C = \\lbrace c_1,\\dots ,c_N\\rbrace $. Each comment, in turn, is a sequence of tokens, where the number of tokens may vary from comment to comment. For the $n$-th comment ($1 \\le n \\le N)$, we let $M_n$ denote the number of tokens. Then, a comment $c_n$ can be represented as a sequence of $M_n$ tokens: $c_n = \\lbrace w_1,\\dots ,w_{M_n}\\rbrace $.",
"Generative component. For the generative component of our model, we use a hierarchical recurrent encoder-decoder (HRED) architecture BIBREF60, a modified version of the popular sequence-to-sequence (seq2seq) architecture BIBREF61 designed to account for dependencies between consecutive inputs. BIBREF23 showed that HRED can successfully model conversational context by encoding the temporal structure of previously seen comments, making it an ideal fit for our use case. Here, we provide a high-level summary of the HRED architecture, deferring deeper technical discussion to BIBREF60 and BIBREF23.",
"An HRED dialog model consists of three components: an utterance encoder, a context encoder, and a decoder. The utterance encoder is responsible for generating semantic vector representations of comments. It consists of a recurrent neural network (RNN) that reads a comment token-by-token, and on each token $w_m$ updates a hidden state $h^{\\text{enc}}$ based on the current token and the previous hidden state:",
"where $f^{\\text{RNN}}$ is a nonlinear gating function (our implementation uses GRU BIBREF62). The final hidden state $h^{\\text{enc}}_M$ can be viewed as a vector encoding of the entire comment.",
"Running the encoder on each comment $c_n$ results in a sequence of $N$ vector encodings. A second encoder, the context encoder, is then run over this sequence:",
"Each hidden state $h^{\\text{con}}_n$ can then be viewed as an encoding of the full conversational context up to and including the $n$-th comment. To generate a response to comment $n$, the context encoding $h^{\\text{con}}_n$ is used to initialize the hidden state $h^{\\text{dec}}_{0}$ of a decoder RNN. The decoder produces a response token by token using the following recurrence:",
"where $f^{\\text{out}}$ is some function that outputs a probability distribution over words; we implement this using a simple feedforward layer. In our implementation, we further augment the decoder with attention BIBREF63, BIBREF64 over context encoder states to help capture long-term inter-comment dependencies. This generative component can be pre-trained using unlabeled conversational data.",
"Prediction component. Given a pre-trained HRED dialog model, we aim to extend the model to predict from the conversational context whether the to-be-forecasted event will occur. Our predictor consists of a multilayer perceptron (MLP) with 3 fully-connected layers, leaky ReLU activations between layers, and sigmoid activation for output. For each comment $c_n$, the predictor takes as input the context encoding $h^{\\text{con}}_n$ and forwards it through the MLP layers, resulting in an output score that is interpreted as a probability $p_{\\text{event}}(c_{n+1})$ that the to-be-forecasted event will happen (e.g., that the conversation will derail).",
"Training the predictive component starts by initializing the weights of the encoders to the values learned in pre-training. The main training loop then works as follows: for each positive sample—i.e., a conversation containing an instance of the to-be-forecasted event (e.g., derailment) at comment $c_e$—we feed the context $c_1,\\dots ,c_{e-1}$ through the encoder and classifier, and compute cross-entropy loss between the classifier output and expected output of 1. Similarly, for each negative sample—i.e., a conversation where none of the comments exhibit the to-be-forecasted event and that ends with $c_N$—we feed the context $c_1,\\dots ,c_{N-1}$ through the model and compute loss against an expected output of 0.",
"Note that the parameters of the generative component are not held fixed during this process; instead, backpropagation is allowed to go all the way through the encoder layers. This process, known as fine-tuning, reshapes the representation learned during pre-training to be more directly useful to prediction BIBREF55.",
"We implement the model and training code using PyTorch, and we are publicly releasing our implementation and the trained models together with the data as part of ConvoKit.",
""
],
[
"",
"We evaluate the performance of CRAFT in the task of forecasting conversational derailment in both the Wikipedia and CMV scenarios. To this end, for each of these datasets we pre-train the generative component on the unlabeled portion of the data and fine-tune it on the labeled training split (data size detailed in Section SECREF3).",
"In order to evaluate our sequential system against conversational-level ground truth, we need to aggregate comment level predictions. If any comment in the conversation triggers a positive prediction—i.e., $p_{\\text{event}}(c_{n+1})$ is greater than a threshold learned on the development split—then the respective conversation is predicted to derail. If this forecast is triggered in a conversation that actually derails, but before the derailment actually happens, then the conversation is counted as a true positive; otherwise it is a false positive. If no positive predictions are triggered for a conversation, but it actually derails then it counts as a false negative; if it does not derail then it is a true negative.",
"Fixed-length window baselines. We first seek to compare CRAFT to existing, fixed-length window approaches to forecasting. To this end, we implement two such baselines: Awry, which is the state-of-the-art method proposed in BIBREF9 based on pragmatic features in the first comment-reply pair, and BoW, a simple bag-of-words baseline that makes a prediction using TF-IDF weighted bag-of-words features extracted from the first comment-reply pair.",
"Online forecasting baselines. Next, we consider simpler approaches for making forecasts as the conversations happen (i.e., in an online fashion). First, we propose Cumulative BoW, a model that recomputes bag-of-words features on all comments seen thus far every time a new comment arrives. While this approach does exhibit the desired behavior of producing updated predictions for each new comment, it fails to account for relationships between comments.",
"This simple cumulative approach cannot be directly extended to models whose features are strictly based on a fixed number of comments, like Awry. An alternative is to use a sliding window: for a feature set based on a window of $W$ comments, upon each new comment we can extract features from a window containing that comment and the $W-1$ comments preceding it. We apply this to the Awry method and call this model Sliding Awry. For both these baselines, we aggregate comment-level predictions in the same way as in our main model.",
"CRAFT ablations. Finally, we consider two modified versions of the CRAFT model in order to evaluate the impact of two of its key components: (1) the pre-training step, and (2) its ability to capture inter-comment dependencies through its hierarchical memory.",
"To evaluate the impact of pre-training, we train the prediction component of CRAFT on only the labeled training data, without first pre-training the encoder layers with the unlabeled data. We find that given the relatively small size of labeled data, this baseline fails to successfully learn, and ends up performing at the level of random guessing. This result underscores the need for the pre-training step that can make use of unlabeled data.",
"To evaluate the impact of the hierarchical memory, we implement a simplified version of CRAFT where the memory size of the context encoder is zero (CRAFT $-$ CE), thus effectively acting as if the pre-training component is a vanilla seq2seq model. In other words, this model cannot capture inter-comment dependencies, and instead at each step makes a prediction based only on the utterance encoding of the latest comment.",
"Results. Table TABREF17 compares CRAFT to the baselines on the test splits (random baseline is 50%) and illustrates several key findings. First, we find that unsurprisingly, accounting for full conversational context is indeed helpful, with even the simple online baselines outperforming the fixed-window baselines. On both datasets, CRAFT outperforms all baselines (including the other online models) in terms of accuracy and F1. Furthermore, although it loses on precision (to CRAFT $-$ CE) and recall (to Cumulative BoW) individually on the Wikipedia data, CRAFT has the superior balance between the two, having both a visibly higher precision-recall curve and larger area under the curve (AUPR) than the baselines (Figure FIGREF20). This latter property is particularly useful in a practical setting, as it allows moderators to tune model performance to some desired precision without having to sacrifice as much in the way of recall (or vice versa) compared to the baselines and pre-existing solutions."
],
[
"We now examine the behavior of CRAFT in greater detail, to better understand its benefits and limitations. We specifically address the following questions: (1) How much early warning does the the model provide? (2) Does the model actually learn an order-sensitive representation of conversational context?",
"Early warning, but how early? The recent interest in forecasting antisocial behavior has been driven by a desire to provide pre-emptive, actionable warning to moderators. But does our model trigger early enough for any such practical goals?",
"For each personal attack correctly forecasted by our model, we count the number of comments elapsed between the time the model is first triggered and the attack. Figure FIGREF22 shows the distribution of these counts: on average, the model warns of an attack 3 comments before it actually happens (4 comments for CMV). To further evaluate how much time this early warning would give to the moderator, we also consider the difference in timestamps between the comment where the model first triggers and the comment containing the actual attack. Over 50% of conversations get at least 3 hours of advance warning (2 hours for CMV). Moreover, 39% of conversations get at least 12 hours of early warning before they derail.",
"Does order matter? One motivation behind the design of our model was the intuition that comments in a conversation are not independent events; rather, the order in which they appear matters (e.g., a blunt comment followed by a polite one feels intuitively different from a polite comment followed by a blunt one). By design, CRAFT has the capacity to learn an order-sensitive representation of conversational context, but how can we know that this capacity is actually used? It is conceivable that the model is simply computing an order-insensitive “bag-of-features”. Neural network models are notorious for their lack of transparency, precluding an analysis of how exactly CRAFT models conversational context. Nevertheless, through two simple exploratory experiments, we seek to show that it does not completely ignore comment order.",
"The first experiment for testing whether the model accounts for comment order is a prefix-shuffling experiment, visualized in Figure FIGREF23. For each conversation that the model predicts will derail, let $t$ denote the index of the triggering comment, i.e., the index where the model first made a derailment forecast. We then construct synthetic conversations by taking the first $t-1$ comments (henceforth referred to as the prefix) and randomizing their order. Finally, we count how often the model no longer predicts derailment at index $t$ in the synthetic conversations. If the model were ignoring comment order, its prediction should remain unchanged (as it remains for the Cumulative BoW baseline), since the actual content of the first $t$ comments has not changed (and CRAFT inference is deterministic). We instead find that in roughly one fifth of cases (12% for CMV) the model changes its prediction on the synthetic conversations. This suggests that CRAFT learns an order-sensitive representation of context, not a mere “bag-of-features”.",
"To more concretely quantify how much this order-sensitive context modeling helps with prediction, we can actively prevent the model from learning and exploiting any order-related dynamics. We achieve this through another type of shuffling experiment, where we go back even further and shuffle the comment order in the conversations used for pre-training, fine-tuning and testing. This procedure preserves the model's ability to capture signals present within the individual comments processed so far, as the utterance encoder is unaffected, but inhibits it from capturing any meaningful order-sensitive dynamics. We find that this hurts the model's performance (65% accuracy for Wikipedia, 59.5% for CMV), lowering it to a level similar to that of the version where we completely disable the context encoder.",
"Taken together, these experiments provide evidence that CRAFT uses its capacity to model conversational context in an order-sensitive fashion, and that it makes effective use of the dynamics within. An important avenue for future work would be developing more transparent models that can shed light on exactly what kinds of order-related features are being extracted and how they are used in prediction."
],
[
"In this work, we introduced a model for forecasting conversational events that processes comments as they happen and takes the full conversational context into account to make an updated prediction at each step. This model fills a void in the existing literature on conversational forecasting, simultaneously addressing the dual challenges of capturing inter-comment dynamics and dealing with an unknown horizon. We find that our model achieves state-of-the-art performance on the task of forecasting derailment in two different datasets that we release publicly. We further show that the resulting system can provide substantial prior notice of derailment, opening up the potential for preemptive interventions by human moderators BIBREF65.",
"While we have focused specifically on the task of forecasting derailment, we view this work as a step towards a more general model for real-time forecasting of other types of emergent properties of conversations. Follow-up work could adapt the CRAFT architecture to address other forecasting tasks mentioned in Section SECREF2—including those for which the outcome is extraneous to the conversation. We expect different tasks to be informed by different types of inter-comment dynamics, and further architecture extensions could add additional supervised fine-tuning in order to direct it to focus on specific dynamics that might be relevant to the task (e.g., exchange of ideas between interlocutors or stonewalling).",
"With respect to forecasting derailment, there remain open questions regarding what human moderators actually desire from an early-warning system, which would affect the design of a practical system based on this work. For instance, how early does a warning need to be in order for moderators to find it useful? What is the optimal balance between precision, recall, and false positive rate at which such a system is truly improving moderator productivity rather than wasting their time through false positives? What are the ethical implications of such a system? Follow-up work could run a user study of a prototype system with actual moderators to address these questions.",
"A practical limitation of the current analysis is that it relies on balanced datasets, while derailment is a relatively rare event for which a more restrictive trigger threshold would be appropriate. While our analysis of the precision-recall curve suggests the system is robust across multiple thresholds ($AUPR=0.7$), additional work is needed to establish whether the recall tradeoff would be acceptable in practice.",
"Finally, one major limitation of the present work is that it assigns a single label to each conversation: does it derail or not? In reality, derailment need not spell the end of a conversation; it is possible that a conversation could get back on track, suffer a repeat occurrence of antisocial behavior, or any number of other trajectories. It would be exciting to consider finer-grained forecasting of conversational trajectories, accounting for the natural—and sometimes chaotic—ebb-and-flow of human interactions.",
"",
"Acknowledgements. We thank Caleb Chiam, Liye Fu, Lillian Lee, Alexandru Niculescu-Mizil, Andrew Wang and Justine Zhang for insightful conversations (with unknown horizon), Aditya Jha for his great help with implementing and running the crowd-sourcing tasks, Thomas Davidson and Claire Liang for exploratory data annotation, as well as the anonymous reviewers for their helpful comments. This work is supported in part by the NSF CAREER award IIS-1750615 and by the NSF Grant SES-1741441."
]
],
"section_name": [
"Introduction",
"Further Related Work",
"Derailment Datasets",
"Conversational Forecasting Model",
"Forecasting Derailment",
"Analysis",
"Conclusions and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"0ce79f911dfee1e315ff07303368b0151b92be7e"
],
"answer": [
{
"evidence": [
"We consider two datasets, representing related but slightly different forecasting tasks. The first dataset is an expanded version of the annotated Wikipedia conversations dataset from BIBREF9. This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use post-hoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future.",
"Wikipedia data. BIBREF9's `Conversations Gone Awry' dataset consists of 1,270 conversations that took place between Wikipedia editors on publicly accessible talk pages. The conversations are sourced from the WikiConv dataset BIBREF59 and labeled by crowdworkers as either containing a personal attack from within (i.e., hostile behavior by one user in the conversation directed towards another) or remaining civil throughout.",
"Reddit CMV data. The CMV dataset is constructed from conversations collected via the Reddit API. In contrast to the Wikipedia-based dataset, we explicitly avoid the use of post-hoc annotation. Instead, we use as our label whether a conversation eventually had a comment removed by a moderator for violation of Rule 2: “Don't be rude or hostile to other users”."
],
"extractive_spans": [],
"free_form_answer": "The Conversations Gone Awry dataset is labelled as either containing a personal attack from withint (i.e. hostile behavior by one user in the conversation directed towards another) or remaining civil throughout. The Reddit Change My View dataset is labelled with whether or not a coversation eventually had a comment removed by a moderator for violation of Rule 2: \"Don't be rude or hostile to others users.\"",
"highlighted_evidence": [
"The first dataset is an expanded version of the annotated Wikipedia conversations dataset from BIBREF9. This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use post-hoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future.",
"Wikipedia data. BIBREF9's `Conversations Gone Awry' dataset consists of 1,270 conversations that took place between Wikipedia editors on publicly accessible talk pages. The conversations are sourced from the WikiConv dataset BIBREF59 and labeled by crowdworkers as either containing a personal attack from within (i.e., hostile behavior by one user in the conversation directed towards another) or remaining civil throughout.",
"Reddit CMV data. The CMV dataset is constructed from conversations collected via the Reddit API. In contrast to the Wikipedia-based dataset, we explicitly avoid the use of post-hoc annotation. Instead, we use as our label whether a conversation eventually had a comment removed by a moderator for violation of Rule 2: “Don't be rude or hostile to other users”."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
},
{
"annotation_id": [
"203140a5b4d44bc2702f6ea51dbeae247c9d29b2",
"74f6d0e1e032451923012d95d117b2138329e088"
],
"answer": [
{
"evidence": [
"To test the effectiveness of this new architecture in forecasting derailment of online conversations, we develop and distribute two new datasets. The first triples in size the highly curated `Conversations Gone Awry' dataset BIBREF9, where civil-starting Wikipedia Talk Page conversations are crowd-labeled according to whether they eventually lead to personal attacks; the second relies on in-the-wild moderation of the popular subreddit ChangeMyView, where the aim is to forecast whether a discussion will later be subject to moderator action due to “rude or hostile” behavior. In both datasets, our model outperforms existing fixed-window approaches, as well as simpler sequential baselines that cannot account for inter-comment relations. Furthermore, by virtue of its online processing of the conversation, our system can provide substantial prior notice of upcoming derailment, triggering on average 3 comments (or 3 hours) before an overtly toxic comment is posted."
],
"extractive_spans": [
" `Conversations Gone Awry' dataset",
"subreddit ChangeMyView"
],
"free_form_answer": "",
"highlighted_evidence": [
"To test the effectiveness of this new architecture in forecasting derailment of online conversations, we develop and distribute two new datasets. The first triples in size the highly curated `Conversations Gone Awry' dataset BIBREF9, where civil-starting Wikipedia Talk Page conversations are crowd-labeled according to whether they eventually lead to personal attacks; the second relies on in-the-wild moderation of the popular subreddit ChangeMyView, where the aim is to forecast whether a discussion will later be subject to moderator action due to “rude or hostile” behavior. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We consider two datasets, representing related but slightly different forecasting tasks. The first dataset is an expanded version of the annotated Wikipedia conversations dataset from BIBREF9. This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use post-hoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future.",
"To test the effectiveness of this new architecture in forecasting derailment of online conversations, we develop and distribute two new datasets. The first triples in size the highly curated `Conversations Gone Awry' dataset BIBREF9, where civil-starting Wikipedia Talk Page conversations are crowd-labeled according to whether they eventually lead to personal attacks; the second relies on in-the-wild moderation of the popular subreddit ChangeMyView, where the aim is to forecast whether a discussion will later be subject to moderator action due to “rude or hostile” behavior. In both datasets, our model outperforms existing fixed-window approaches, as well as simpler sequential baselines that cannot account for inter-comment relations. Furthermore, by virtue of its online processing of the conversation, our system can provide substantial prior notice of upcoming derailment, triggering on average 3 comments (or 3 hours) before an overtly toxic comment is posted."
],
"extractive_spans": [],
"free_form_answer": "An expanded version of the existing 'Conversations Gone Awry' dataset and the ChangeMyView dataset, a subreddit whose only annotation is whether the conversation required action by the Reddit moderators. ",
"highlighted_evidence": [
"The first dataset is an expanded version of the annotated Wikipedia conversations dataset from BIBREF9. This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use post-hoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future.",
"The first triples in size the highly curated `Conversations Gone Awry' dataset BIBREF9, where civil-starting Wikipedia Talk Page conversations are crowd-labeled according to whether they eventually lead to personal attacks; the second relies on in-the-wild moderation of the popular subreddit ChangeMyView, where the aim is to forecast whether a discussion will later be subject to moderator action due to “rude or hostile” behavior."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"What labels for antisocial events are available in datasets?",
"What are two datasets model is applied to?"
],
"question_id": [
"c74185bced810449c5f438f11ed6a578d1e359b4",
"88e5d37617e14d6976cc602a168332fc23644f19"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 2: Sketch of the CRAFT architecture.",
"Table 1: Comparison of the capabilities of each baseline and our CRAFT models (full and without the Context Encoder) with regards to capturing inter-comment (D)ynamics, processing conversations in an (O)nline fashion, and automatically (L)earning feature representations, as well as their performance in terms of (A)ccuracy, (P)recision, (R)ecall, False Positive Rate (FPR), and F1 score. Awry is the model previously proposed by Zhang et al. (2018a) for this task.",
"Figure 3: Precision-recall curves and the area under each curve. To reduce clutter, we show only the curves for Wikipedia data (CMV curves are similar) and exclude the fixed-length window baselines (which perform worse).",
"Figure 4: Distribution of number of comments elapsed between the model’s first warning and the attack.",
"Figure 5: The prefix-shuffling procedure (t = 4)."
],
"file": [
"5-Figure2-1.png",
"7-Table1-1.png",
"7-Figure3-1.png",
"8-Figure4-1.png",
"8-Figure5-1.png"
]
} | [
"What labels for antisocial events are available in datasets?",
"What are two datasets model is applied to?"
] | [
[
"1909.01362-Derailment Datasets-6",
"1909.01362-Derailment Datasets-1",
"1909.01362-Derailment Datasets-2"
],
[
"1909.01362-Derailment Datasets-1",
"1909.01362-Introduction-11"
]
] | [
"The Conversations Gone Awry dataset is labelled as either containing a personal attack from withint (i.e. hostile behavior by one user in the conversation directed towards another) or remaining civil throughout. The Reddit Change My View dataset is labelled with whether or not a coversation eventually had a comment removed by a moderator for violation of Rule 2: \"Don't be rude or hostile to others users.\"",
"An expanded version of the existing 'Conversations Gone Awry' dataset and the ChangeMyView dataset, a subreddit whose only annotation is whether the conversation required action by the Reddit moderators. "
] | 226 |
1906.01040 | A Surprising Density of Illusionable Natural Speech | Recent work on adversarial examples has demonstrated that most natural inputs can be perturbed to fool even state-of-the-art machine learning systems. But does this happen for humans as well? In this work, we investigate: what fraction of natural instances of speech can be turned into"illusions"which either alter humans' perception or result in different people having significantly different perceptions? We first consider the McGurk effect, the phenomenon by which adding a carefully chosen video clip to the audio channel affects the viewer's perception of what is said (McGurk and MacDonald, 1976). We obtain empirical estimates that a significant fraction of both words and sentences occurring in natural speech have some susceptibility to this effect. We also learn models for predicting McGurk illusionability. Finally we demonstrate that the Yanny or Laurel auditory illusion (Pressnitzer et al., 2018) is not an isolated occurrence by generating several very different new instances. We believe that the surprising density of illusionable natural speech warrants further investigation, from the perspectives of both security and cognitive science. Supplementary videos are available at: https://www.youtube.com/playlist?list=PLaX7t1K-e_fF2iaenoKznCatm0RC37B_k. | {
"paragraphs": [
[
"A growing body of work on adversarial examples has identified that for machine-learning (ML) systems that operate on high-dimensional data, for nearly every natural input there exists a small perturbation of the point that will be misclassified by the system, posing a threat to its deployment in certain critical settings BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . More broadly, the susceptibility of ML systems to adversarial examples has prompted a re-examination of whether current ML systems are truly learning or if they are assemblages of tricks that are effective yet brittle and easily fooled BIBREF9 . Implicit in this line of reasoning is the assumption that instances of ”real\" learning, such as human cognition, yield extremely robust systems. Indeed, at least in computer vision, human perception is regarded as the gold-standard for robustness to adversarial examples.",
"Evidently, humans can be fooled by a variety of illusions, whether they be optical, auditory, or other; and there is a long line of research from the cognitive science and psychology communities investigating these BIBREF10 . In general, however, these illusions are viewed as isolated examples that do not arise frequently, and which are far from the instances encountered in everyday life.",
"In this work, we attempt to understand how susceptible humans' perceptual systems for natural speech are to carefully designed “adversarial attacks.” We investigate the density of certain classes of illusion, that is, the fraction of natural language utterances whose comprehension can be affected by the illusion. Our study centers around the McGurk effect, which is the well-studied phenomenon by which the perception of what we hear can be influenced by what we see BIBREF0 . A prototypical example is that the audio of the phoneme “baa,” accompanied by a video of someone mouthing “vaa”, can be perceived as “vaa” or “gaa” (Figure 1 ). This effect persists even when the subject is aware of the setup, though the strength of the effect varies significantly across people and languages and with factors such as age, gender, and disorders BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 .",
"A significant density of illusionable instances for humans might present similar types of security risks as adversarial examples do for ML systems. Auditory signals such as public service announcements, instructions sent to first responders, etc., could be targeted by a malicious agent. Given only access to a screen within eyesight of the intended victims, the agent might be able to significantly obfuscate or alter the message perceived by those who see the screen (even peripherally)."
],
[
"Illusionable instances for humans are similar to adversarial examples for ML systems. Strictly speaking, however, our investigation of the density of natural language for which McGurk illusions can be created, is not the human analog of adversarial examples. The adversarial examples for ML systems are datapoints that are misclassified, despite being extremely similar to a typical datapoint (that is correctly classified). Our illusions of misdubbed audio are not extremely close to any typically encountered input, since our McGurk samples have auditory signals corresponding to one phoneme/word and visual signals corresponding to another. Also, there is a compelling argument for why the McGurk confusion occurs, namely that human speech perception is bimodal (audio-visual) in nature when lip reading is available BIBREF20 , BIBREF21 . To the best of our knowledge, prior to our work, there has been little systematic investigation of the extent to which the McGurk effect, or other types of illusions, can be made dense in the set of instances encountered in everyday life. The closest work is BIBREF22 , where the authors demonstrate that some adversarial examples for computer vision systems also fool humans when humans were given less than a tenth of second to view the image. However, some of these examples seem less satisfying as the perturbation acts as a pixel-space interpolation between the original image and the “incorrect” class. This results in images that are visually borderline between two classes, and as such, do not provide a sense of illusion to the viewer. In general, researchers have not probed the robustness of human perception with the same tools, intent, or perspective, with which the security community is currently interrogating the robustness of ML systems."
],
[
"For the McGurk effect, we attempt an illusion for a language token (e.g. phoneme, word, sentence) $x$ by creating a video where an audio stream of $x$ is visually dubbed over by a person saying $x^{\\prime }\\ne x$ . We stress that the audio portion of the illusion is not modified and corresponds to a person saying $x$ . The illusion $f(x^{\\prime },x)$ affects a listener if they perceive what is being said to be $y\\ne x$ if they watched the illusory video whereas they perceive $x$ if they had either listened to the audio stream without watching the video or had watched the original unaltered video, depending on specification. We call a token illusionable if an illusion can be made for the token that affects the perception of a significant fraction of people.",
"In Section \"Phoneme-level experiments\" , we analyze the extent to which the McGurk effect can be used to create illusions for phonemes, words, and sentences, and analyze the fraction of natural language that is susceptible to such illusionability. We thereby obtain a lower bound on the density of illusionable natural speech.",
"We find that 1) a significant fraction of words that occur in everyday speech can be turned into McGurk-style illusions, 2) such illusions persist when embedded within the context of natural sentences, and in fact affect a significant fraction of natural sentences, and 3) the illusionability of words and sentences can be predicted using features from natural language modeling."
],
[
"We began by determining which phoneme sounds can be paired with video dubs of other phonemes to effect a perceived phoneme that is different from the actual sound. We created McGurk videos for all vowel pairs preceded with the consonant // as well as for all consonant pairs followed by the vowel // spoken by a speaker. There are 20 vowel phonemes and 24 consonant phonemes in American English although /ʤ/ and /ʒ/ are redundant for our purposes. Based on labels provided by 10 individuals we found that although vowels were not easily confused, there are a number of illusionable consonants. We note that the illusionable phoneme pairs depend both on the speaker and listener identities. Given Table 1 of illusionable phonemes, the goal was then to understand whether these could be leveraged within words or sentences; and if so, the fraction of natural speech that is susceptible."
],
[
"We sampled 200 unique words (listed in Table 2 ) from the 10,000 most common words in the Project Gutenberg novels in proportion to their frequency in the corpus. The 10k words collectively have a prevalence of 80.6% in the corpus. Of the 200 sampled words, 147 (73.5%) contained phonemes that our preliminary phoneme study suggested might be illusionable. For these 147 words, we paired audio clips spoken by the speaker with illusory video dubs of the speaker saying the words with appropriately switched out phonemes. We tested these videos on 20 naive test subjects who did not participate in the preliminary study. Each subject watched half of the words and listened without video to the other half of the words, and were given the instructions: \"Write down what you hear. What you hear may or may not be sensical. Also write down if a clip sounds unclear to you. Not that a clip may sound nonsensical but clear.\" Subjects were allowed up to three plays of each clip.",
"We found that watching the illusory videos led to an average miscomprehension rate of 24.8%, a relative 148% increase from the baseline of listening to the audio alone (Table 3 ). The illusory videos made people less confident about their correct answers, with an additional 5.1% of words being heard correctly but unclearly, compared to 2.1% for audio only. For 17% of the 200 words, the illusory videos increased the error rates by more than 30% above the audio-only baseline.",
"To create a predictive model for word-level illusionability, we used illusionable phonemes enriched with positional information as features. Explicitly, for each of the 10 illusionable phonemes, we created three features from the phoneme being in an initial position (being the first phoneme of the word), a medial position (having phonemes come before and after), or a final position (being the last phoneme of the word). We then represented each word with a binary bag-of-words model BIBREF23 , giving each of the 30 phonemes-with-phonetic-context features a value of 1 if present in the word and 0 otherwise. We performed ridge regression on these features with a constant term. We searched for the optimal $l2$ regularization constant among the values [0.1, 1, 10, 100] and picked the optimal one based on training set performance. The train:test split was in the proportion 85%:15% and was randomly chosen for each trial. Across 10k randomized trials, we obtain average training and test set correlations of $91.1\\pm 0.6\\%$ and $44.6\\pm 28.9\\%$ respectively.",
"Our final model achieves an out-of-sample correlation of 57% between predicted and observed illusionabilites. Here, the observed illusionability of the words is calculated as the difference between the accuracy of watchers of the illusory videos and the accuracy of listeners, where “accuracy” is defined as the fraction of respondents who were correct. For each word, the predicted illusionability is calculated from doing inference on that word using the averaged regression coefficients of the regression trials where the word is not in the training set.",
"Our predicted illusionability is also calibrated, in the sense that for the words predicted to have an illusionability <0.1, the mean empirical illusionability is 0.04; for words with predicted illusionability in the interval [0.1, 0.2] the mean empirical illusionability is 0.14; for predicted illusionability between [0.2, 0.3] the mean observed is 0.27; and for predicted illusionability >0.3, the mean observed is 0.50. Figure 2 visually depicts the match between the observed and predicted word illusionabilities."
],
[
"We set up the following experiment on naturally occurring sentences. We randomly sampled 300 sentences of lengths 4-8 words inclusive from the novel Little Women BIBREF24 from the Project Gutenberg corpus. From this reduced sample, we selected and perturbed 32 sentences that we expected to be illusionable (listed in Table 4 ). With the speaker, we prepared two formats of each sentence: original video (with original audio), and illusory video (with original audio).We then evaluated the perception of these on 1306 naive test subjects on Amazon Mechanical Turk. The Turkers were shown videos for six randomly selected sentences, three illusory and three original, and were given the prompt: \"Press any key to begin video [index #] of 6. Watch the whole video, and then you will be prompted to write down what the speaker said.\" Only one viewing of any clip was allowed, to simulate the natural setting of observing a live audio/video stream. Each Turker was limited to six videos to reduce respondent fatigue. Turkers were also asked to report their level of confidence in what they heard on a scale from no uncertainty (0%) to complete uncertainty (100%). One hundred and twelve Turkers (8.6%) did not adhere to the prompt, writing unrelated responses, and their results were omitted from analysis. We found that watching the illusory videos led to an average miscomprehension rate of 32.8%, a relative 145% increase from the baseline of listening to the audio alone (Table 5 ). The illusory videos made people less confident about their correct answers. Turkers who correctly identified the audio message in an illusory video self-reported an average uncertainty of 42.9%, which is a relative 123% higher than the average reported by the Turkers who correctly understood the original videos. Examples of mistakes made by listeners of the illusory videos are shown in Table 6 . Overall we found that for 11.5% of the 200 sampled sentences (23 out of the 30 videos we created), the illusory videos increased the error rates by more than 10% above the audio-only baseline.",
"We obtained a sentence-level illusionability prediction model with an out-of-sample correlation of 33% between predicted and observed illusionabilities. Here, the observed illusionability of the sentences was calculated as the difference between the accuracy of watchers of the illusory videos and the accuracy of watchers of original videos, where “accuracy” is defined as the fraction of respondents who were correct. We obtained predicted illusionabilities by simply using the maximum word illusionability prediction amongst the words in each sentence, with word predictions obtained from the word-level model. We attempted to improve our sentence-level predictive model by incorporating how likely the words appear under a natural language distribution, considering three classes of words: words in the sentence for which no illusion was attempted, the words for which an illusion was attempted, and the potentially perceived words for words for which an illusion was attempted. We used log word frequencies obtained from the the top 36.7k most common words from the Project Gutenberg corpus. This approach could not attain better out-of-sample correlations than the naive method. This implies that context is important for sentence-level illusionability, and more complex language models should be used.",
"Finally, comparing word-level and sentence-level McGurk illusionabilities in natural speech, we observe that that the former is significantly higher. A greater McGurk effect at the word level is to be expected–sentences provide context with which the viewer could fill in confusions and misunderstandings. Furthermore, when watching a sentence video compared to a short word video, the viewer's attention is more likely to stray, both from the visual component of the video, which evidently reduces the McGurk effect, as well as the from the audio component, which likely prompts the viewer to rely even more heavily on context. Nevertheless, there remains a significant amount of illusionability at the sentence-level."
],
[
"This work is an initial step towards exploring the density of illusionable phenomena for humans. There are many natural directions for future work. In the vein of further understanding McGurk-style illusions, it seems worth building more accurate predictive models for sentence-level effects, and further investigating the security risks posed by McGurk illusions. For example, one concrete next step in understanding McGurk-style illusions would be to actually implement a system which takes an audio input, and outputs a video dub resulting in significant misunderstanding. Such a system would need to combine a high-quality speech-to-video-synthesis system BIBREF25 , BIBREF26 , with a fleshed-out language model and McGurk prediction model. There is also the question of how to guard against “attacks” on human perception. For example, in the case of the McGurk effect, how can one rephrase a passage of text in such a way that the meaning is unchanged, but the rephrased text is significantly more robust to McGurk style manipulations? The central question in this direction is what fraction of natural language can be made robust without significantly changing the semantics.",
"A better understanding of when and why certain human perception systems are nonrobust can also be applied to make ML systems more robust. In particular, neural networks have been found to be susceptible to adversarial examples in automatic speech recognition BIBREF27 , BIBREF28 and to the McGurk effect BIBREF29 , and a rudimentary approach to making language robust to the latter problem would be to use a reduced vocabulary that avoids words that score highly in our word-level illusionability prediction model. Relatedly, at the interface of cognitive science and adversarial examples, there has been work suggesting that humans can anticipate when or how machines will misclassify, including for adversarial examples BIBREF30 , BIBREF31 , BIBREF32 .",
"More broadly, as the tools for probing the weaknesses of ML systems develop further, it seems like a natural time to reexamine the supposed robustness of human perception. We anticipate unexpected findings. To provide one example, we summarize some preliminary results on audio-only illusions."
],
[
"An audio clip of the word “Laurel\" gained widespread attention in 2018, with coverage by notable news outlets such as The New York Times and Time. Roughly half of listeners perceive “Laurel” and the other half perceive “Yanny” or similar-sounding words, with high confidence on both sides BIBREF1 . One of the reasons the public was intrigued is because examples of such phenomena are viewed as rare, isolated instances. In a preliminary attempt to investigate the density of such phenomena, we identified five additional distinct examples (Table 7 ). The supplementary files include 10 versions of one of these examples, where listeners tend to perceive either “worlds” or “yikes.” Across the different audio clips, one should be able to hear both interpretations. The threshold for switching from one interpretation to another differs from person to person.",
"These examples were generated by examining 5000 words, and selecting the 50 whose spectrograms contain a balance of high and low frequency components that most closely matched those for the word “Laurel\". Each audio file corresponded to the Google Cloud Text-to-Speech API synthesis of a word, after low frequencies were damped and the audio was slowed $1.3$ - $1.9$ x. After listening to these top 50 candidates, we evaluated the most promising five on a set of 15 individuals (3 female, 12 male, age range 22-33). We found multiple distributional modes of perceptions for all five audio clips. For example, a clip of “worlds” with the high frequencies damped and slowed down 1.5x was perceived by five listeners as “worlds”, four as “yikes/yites” and six as “nights/lights”. While these experiments do not demonstrate a density of such examples with respect to the set of all words—and it is unlikely that illusory audio tracks in this style can be created for the majority of words—they illustrate that even the surprising Yanny or Laurel phenomenon is not an isolated occurrence. It remains to be seen how dense such phenomena can be, given the right sort of subtle audio manipulation. `"
],
[
"Our work suggests that for a significant fraction of natural speech, human perception can be altered by using subtle, learnable perturbations. This is an initial step towards exploring the density of illusionable phenomenon for humans, and examining the extent to which human perception may be vulnerable to security risks like those that adversarial examples present for ML systems.",
"We hope our work inspires future investigations into the discovery, generation, and quantification of multimodal and unimodal audiovisual and auditory illusions for humans. There exist many open research questions on when and why humans are susceptible to various types of illusions, how to model the illusionability of natural language, and how natural language can be made more robust to illusory perturbations. Additionally, we hope such investigations inform our interpretations of the strengths and weaknesses of current ML systems. Finally, there is the possibility that some vulnerability to carefully crafted adversarial examples may be inherent to all complex learning systems that interact with high-dimensional inputs in an environment with limited data; any thorough investigation of this question must also probe the human cognitive system."
],
[
"This research was supported in part by NSF award AF:1813049, an ONR Young Investigator award (N00014-18-1-2295), and a grant from Stanford's Institute for Human-Centered AI. The authors would like to thank Jean Betterton, Shivam Garg, Noah Goodman, Kelvin Guu, Michelle Lee, Percy Liang, Aleksander Makelov, Jacob Plachta, Jacob Steinhardt, Jim Terry, and Alexander Whatley for useful feedback on the work. The research was done under Stanford IRB Protocol 46430."
]
],
"section_name": [
"Introduction",
"Related work",
"Problem setup",
"Phoneme-level experiments",
"Word-level experiments",
"Sentence-level experiments",
"Future Directions",
"Auditory Illusions",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"9f1fc68aa78e787efa94f87063461278c9793926",
"da308152b2af8713bd201cf0fbc0823458501c7b"
],
"answer": [
{
"evidence": [
"For the McGurk effect, we attempt an illusion for a language token (e.g. phoneme, word, sentence) $x$ by creating a video where an audio stream of $x$ is visually dubbed over by a person saying $x^{\\prime }\\ne x$ . We stress that the audio portion of the illusion is not modified and corresponds to a person saying $x$ . The illusion $f(x^{\\prime },x)$ affects a listener if they perceive what is being said to be $y\\ne x$ if they watched the illusory video whereas they perceive $x$ if they had either listened to the audio stream without watching the video or had watched the original unaltered video, depending on specification. We call a token illusionable if an illusion can be made for the token that affects the perception of a significant fraction of people.",
"In this work, we attempt to understand how susceptible humans' perceptual systems for natural speech are to carefully designed “adversarial attacks.” We investigate the density of certain classes of illusion, that is, the fraction of natural language utterances whose comprehension can be affected by the illusion. Our study centers around the McGurk effect, which is the well-studied phenomenon by which the perception of what we hear can be influenced by what we see BIBREF0 . A prototypical example is that the audio of the phoneme “baa,” accompanied by a video of someone mouthing “vaa”, can be perceived as “vaa” or “gaa” (Figure 1 ). This effect persists even when the subject is aware of the setup, though the strength of the effect varies significantly across people and languages and with factors such as age, gender, and disorders BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 ."
],
"extractive_spans": [],
"free_form_answer": "a perceptual illusion, where listening to a speech sound while watching a mouth pronounce a different sound changes how the audio is heard",
"highlighted_evidence": [
"For the McGurk effect, we attempt an illusion for a language token (e.g. phoneme, word, sentence) $x$ by creating a video where an audio stream of $x$ is visually dubbed over by a person saying $x^{\\prime }\\ne x$ .",
"The illusion $f(x^{\\prime },x)$ affects a listener if they perceive what is being said to be $y\\ne x$ if they watched the illusory video whereas they perceive $x$ if they had either listened to the audio stream without watching the video or had watched the original unaltered video, depending on specification. ",
"A prototypical example is that the audio of the phoneme “baa,” accompanied by a video of someone mouthing “vaa”, can be perceived as “vaa” or “gaa” (Figure 1 )."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this work, we attempt to understand how susceptible humans' perceptual systems for natural speech are to carefully designed “adversarial attacks.” We investigate the density of certain classes of illusion, that is, the fraction of natural language utterances whose comprehension can be affected by the illusion. Our study centers around the McGurk effect, which is the well-studied phenomenon by which the perception of what we hear can be influenced by what we see BIBREF0 . A prototypical example is that the audio of the phoneme “baa,” accompanied by a video of someone mouthing “vaa”, can be perceived as “vaa” or “gaa” (Figure 1 ). This effect persists even when the subject is aware of the setup, though the strength of the effect varies significantly across people and languages and with factors such as age, gender, and disorders BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 ."
],
"extractive_spans": [],
"free_form_answer": "When the perception of what we hear is influenced by what we see.",
"highlighted_evidence": [
"In this work, we attempt to understand how susceptible humans' perceptual systems for natural speech are to carefully designed “adversarial attacks.” We investigate the density of certain classes of illusion, that is, the fraction of natural language utterances whose comprehension can be affected by the illusion. Our study centers around the McGurk effect, which is the well-studied phenomenon by which the perception of what we hear can be influenced by what we see BIBREF0 . A prototypical example is that the audio of the phoneme “baa,” accompanied by a video of someone mouthing “vaa”, can be perceived as “vaa” or “gaa” (Figure 1 )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"105b12239011722b869c8ba3c1e2fe74b157a364"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"What is the McGurk effect?",
"Are humans and machine learning systems fooled by the same kinds of illusions?"
],
"question_id": [
"70148c8d0f345ea36200d5ba19d021924d98e759",
"27cf16bc9ef71761b9df6217f00f39f21130ce15"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Illustration of the McGurk effect. For some phoneme pairs, when the speaker visibly mouths phoneme A but the auditory stimulus is actually the phoneme B, listeners tend to perceive a phoneme C 6= B.",
"Table 1: Illusionable phonemes and effects based on preliminary phoneme-pair testing. Where a number of lip movements were available to affect a phoneme, the most effective one is listed.",
"Table 2: The 200 unique words sampled from the Project Gutenberg novel corpus. The 147 of those for which an illusory video was created are listed on top. Ordering is otherwise alphabetical.",
"Table 3: Test results for word-level McGurk illusions among the 147 words predicted to be illusionable. Shown are average error rates for watching the illusory video vs listening to the audio only, as well as the percentage of words that are correctly identified but sound ambiguous to the listener.",
"Figure 2: Predicted word illusionability closely matches observed word illusionability, with out-ofsample correlation of 57%. The words are sorted by increasing predicted word illusionability (and the observed illusionability of each word was not used in calculating the prediction of that word).",
"Table 4: The sentences randomly sampled from Little Women for which we made illusory videos. Those that had observed illusionability >0.1 are listed on top.",
"Table 5: Test results for sentence-level McGurk illusions, showing average error rates for watching the illusory video vs watching the original video, and average self-reported uncertainty for correctly identified words.",
"Table 6: Sample verbatim mistakes for sentence-level McGurk illusions. Spelling mistakes were ignored for accuracy calculations. Sample illusory videos are provided in the supplementary files.",
"Table 7: Test results for Yanny/Laurel style illusions. Each row displays a cluster of similar-sounding reported words, with counts. Emdashes indicate unintelligible."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"5-Figure2-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"6-Table6-1.png",
"8-Table7-1.png"
]
} | [
"What is the McGurk effect?"
] | [
[
"1906.01040-Introduction-2",
"1906.01040-Problem setup-0"
]
] | [
"When the perception of what we hear is influenced by what we see."
] | 229 |
1909.01383 | Context-Aware Monolingual Repair for Neural Machine Translation | Modern sentence-level NMT systems often produce plausible translations of isolated sentences. However, when put in context, these translations may end up being inconsistent with each other. We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations. DocRepair performs automatic post-editing on a sequence of sentence-level translations, refining translations of sentences in context of each other. For training, the DocRepair model requires only monolingual document-level data in the target language. It is trained as a monolingual sequence-to-sequence model that maps inconsistent groups of sentences into consistent ones. The consistent groups come from the original training data; the inconsistent groups are obtained by sampling round-trip translations for each isolated sentence. We show that this approach successfully imitates inconsistencies we aim to fix: using contrastive evaluation, we show large improvements in the translation of several contextual phenomena in an English-Russian translation task, as well as improvements in the BLEU score. We also conduct a human evaluation and show a strong preference of the annotators to corrected translations over the baseline ones. Moreover, we analyze which discourse phenomena are hard to capture using monolingual data only. | {
"paragraphs": [
[
"Machine translation has made remarkable progress, and studies claiming it to reach a human parity are starting to appear BIBREF0. However, when evaluating translations of the whole documents rather than isolated sentences, human raters show a stronger preference for human over machine translation BIBREF1. These findings emphasize the need to shift towards context-aware machine translation both from modeling and evaluation perspective.",
"Most previous work on context-aware NMT assumed that either all the bilingual data is available at the document level BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 or at least its fraction BIBREF11. But in practical scenarios, document-level parallel data is often scarce, which is one of the challenges when building a context-aware system.",
"We introduce an approach to context-aware machine translation using only monolingual document-level data. In our setting, a separate monolingual sequence-to-sequence model (DocRepair) is used to correct sentence-level translations of adjacent sentences. The key idea is to use monolingual data to imitate typical inconsistencies between context-agnostic translations of isolated sentences. The DocRepair model is trained to map inconsistent groups of sentences into consistent ones. The consistent groups come from the original training data; the inconsistent groups are obtained by sampling round-trip translations for each isolated sentence.",
"To validate the performance of our model, we use three kinds of evaluation: the BLEU score, contrastive evaluation of translation of several discourse phenomena BIBREF11, and human evaluation. We show strong improvements for all metrics.",
"We analyze which discourse phenomena are hard to capture using monolingual data only. Using contrastive test sets for targeted evaluation of several contextual phenomena, we compare the performance of the models trained on round-trip translations and genuine document-level parallel data. Among the four phenomena in the test sets we use (deixis, lexical cohesion, VP ellipsis and ellipsis which affects NP inflection) we find VP ellipsis to be the hardest phenomenon to be captured using round-trip translations.",
"Our key contributions are as follows:",
"we introduce the first approach to context-aware machine translation using only monolingual document-level data;",
"our approach shows substantial improvements in translation quality as measured by BLEU, targeted contrastive evaluation of several discourse phenomena and human evaluation;",
"we show which discourse phenomena are hard to capture using monolingual data only."
],
[
"We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations of a context-agnostic MT system. It does not use any states of a trained MT model whose outputs it corrects and therefore can in principle be trained to correct translations from any black-box MT system.",
"The DocRepair model requires only monolingual document-level data in the target language. It is a monolingual sequence-to-sequence model that maps inconsistent groups of sentences into consistent ones. Consistent groups come from monolingual document-level data. To obtain inconsistent groups, each sentence in a group is replaced with its round-trip translation produced in isolation from context. More formally, forming a training minibatch for the DocRepair model involves the following steps (see also Figure FIGREF9):",
"sample several groups of sentences from the monolingual data;",
"for each sentence in a group, (i) translate it using a target-to-source MT model, (ii) sample a translation of this back-translated sentence in the source language using a source-to-target MT model;",
"using these round-trip translations of isolated sentences, form an inconsistent version of the initial groups;",
"use inconsistent groups as input for the DocRepair model, consistent ones as output.",
"At test time, the process of getting document-level translations is two-step (Figure FIGREF10):",
"produce translations of isolated sentences using a context-agnostic MT model;",
"apply the DocRepair model to a sequence of context-agnostic translations to correct inconsistencies between translations.",
"In the scope of the current work, the DocRepair model is the standard sequence-to-sequence Transformer. Sentences in a group are concatenated using a reserved token-separator between sentences. The Transformer is trained to correct these long inconsistent pseudo-sentences into consistent ones. The token-separator is then removed from corrected translations."
],
[
"We use contrastive test sets for evaluation of discourse phenomena for English-Russian by BIBREF11. These test sets allow for testing different kinds of phenomena which, as we show, can be captured from monolingual data with varying success. In this section, we provide test sets statistics and briefly describe the tested phenomena. For more details, the reader is referred to BIBREF11."
],
[
"There are four test sets in the suite. Each test set contains contrastive examples. It is specifically designed to test the ability of a system to adapt to contextual information and handle the phenomenon under consideration. Each test instance consists of a true example (a sequence of sentences and their reference translation from the data) and several contrastive translations which differ from the true one only in one specific aspect. All contrastive translations are correct and plausible translations at the sentence level, and only context reveals the inconsistencies between them. The system is asked to score each candidate translation, and we compute the system accuracy as the proportion of times the true translation is preferred to the contrastive ones. Test set statistics are shown in Table TABREF15. The suites for deixis and lexical cohesion are split into development and test sets, with 500 examples from each used for validation purposes and the rest for testing. Convergence of both consistency scores on these development sets and BLEU score on a general development set are used as early stopping criteria in models training. For ellipsis, there is no dedicated development set, so we evaluate on all the ellipsis data and do not use it for development."
],
[
"Deixis Deictic words or phrases, are referential expressions whose denotation depends on context. This includes personal deixis (“I”, “you”), place deixis (“here”, “there”), and discourse deixis, where parts of the discourse are referenced (“that's a good question”). The test set examples are all related to person deixis, specifically the T-V distinction between informal and formal you (Latin “tu” and “vos”) in the Russian translations, and test for consistency in this respect.",
"Ellipsis Ellipsis is the omission from a clause of one or more words that are nevertheless understood in the context of the remaining elements. In machine translation, elliptical constructions in the source language pose a problem in two situations. First, if the target language does not allow the same types of ellipsis, requiring the elided material to be predicted from context. Second, if the elided material affects the syntax of the sentence. For example, in Russian the grammatical function of a noun phrase, and thus its inflection, may depend on the elided verb, or, conversely, the verb inflection may depend on the elided subject.",
"There are two different test sets for ellipsis. One contains examples where a morphological form of a noun group in the last sentence can not be understood without context beyond the sentence level (“ellipsis (infl.)” in Table TABREF15). Another includes cases of verb phrase ellipsis in English, which does not exist in Russian, thus requires predicting the verb when translating into Russian (“ellipsis (VP)” in Table TABREF15).",
"Lexical cohesion The test set focuses on reiteration of named entities. Where several translations of a named entity are possible, a model has to prefer consistent translations over inconsistent ones."
],
[
"We use the publicly available OpenSubtitles2018 corpus BIBREF12 for English and Russian. For a fair comparison with previous work, we train the baseline MT system on the data released by BIBREF11. Namely, our MT system is trained on 6m instances. These are sentence pairs with a relative time overlap of subtitle frames between source and target language subtitles of at least $0.9$.",
"We gathered 30m groups of 4 consecutive sentences as our monolingual data. We used only documents not containing groups of sentences from general development and test sets as well as from contrastive test sets. The main results we report are for the model trained on all 30m fragments.",
"We use the tokenization provided by the corpus and use multi-bleu.perl on lowercased data to compute BLEU score. We use beam search with a beam of 4.",
"Sentences were encoded using byte-pair encoding BIBREF13, with source and target vocabularies of about 32000 tokens. Translation pairs were batched together by approximate sequence length. Each training batch contained a set of translation pairs containing approximately 15000 source tokens. It has been shown that Transformer's performance depends heavily on batch size BIBREF14, and we chose a large batch size to ensure the best performance. In training context-aware models, for early stopping we use both convergence in BLEU score on the general development set and scores on the consistency development sets. After training, we average the 5 latest checkpoints."
],
[
"The baseline model, the model used for back-translation, and the DocRepair model are all Transformer base models BIBREF15. More precisely, the number of layers is $N=6$ with $h = 8$ parallel attention layers, or heads. The dimensionality of input and output is $d_{model} = 512$, and the inner-layer of a feed-forward networks has dimensionality $d_{ff}=2048$. We use regularization as described in BIBREF15.",
"As a second baseline, we use the two-pass CADec model BIBREF11. The first pass produces sentence-level translations. The second pass takes both the first-pass translation and representations of the context sentences as input and returns contextualized translations. CADec requires document-level parallel training data, while DocRepair only needs monolingual training data."
],
[
"On the selected 6m instances we train sentence-level translation models in both directions. To create training data for DocRepair, we proceed as follows. The Russian monolingual data is first translated into English, using the Russian$\\rightarrow $English model and beam search with beam size of 4. Then, we use the English$\\rightarrow $Russian model to sample translations with temperature of $0{.}5$. For each sentence, we precompute 20 sampled translations and randomly choose one of them when forming a training minibatch for DocRepair. Also, in training, we replace each token in the input with a random one with the probability of $10\\%$."
],
[
"As in BIBREF15, we use the Adam optimizer BIBREF16, the parameters are $\\beta _1 = 0{.}9$, $\\beta _2 = 0{.}98$ and $\\varepsilon = 10^{-9}$. We vary the learning rate over the course of training using the formula:",
"where $warmup\\_steps = 16000$ and $scale=4$."
],
[
"The BLEU scores are provided in Table TABREF24 (we evaluate translations of 4-sentence fragments). To see which part of the improvement is due to fixing agreement between sentences rather than simply sentence-level post-editing, we train the same repair model at the sentence level. Each sentence in a group is now corrected separately, then they are put back together in a group. One can see that most of the improvement comes from accounting for extra-sentential dependencies. DocRepair outperforms the baseline and CADec by 0.7 BLEU, and its sentence-level repair version by 0.5 BLEU."
],
[
"Scores on the phenomena test sets are provided in Table TABREF26. For deixis, lexical cohesion and ellipsis (infl.) we see substantial improvements over both the baseline and CADec. The largest improvement over CADec (22.5 percentage points) is for lexical cohesion. However, there is a drop of almost 5 percentage points for VP ellipsis. We hypothesize that this is because it is hard to learn to correct inconsistencies in translations caused by VP ellipsis relying on monolingual data alone. Figure FIGREF27(a) shows an example of inconsistency caused by VP ellipsis in English. There is no VP ellipsis in Russian, and when translating auxiliary “did” the model has to guess the main verb. Figure FIGREF27(b) shows steps of generating round-trip translations for the target side of the previous example. When translating from Russian, main verbs are unlikely to be translated as the auxiliary “do” in English, and hence the VP ellipsis is rarely present on the English side. This implies the model trained using the round-trip translations will not be exposed to many VP ellipsis examples in training. We discuss this further in Section SECREF34.",
"Table TABREF28 provides scores for deixis and lexical cohesion separately for different distances between sentences requiring consistency. It can be seen, that the performance of DocRepair degrades less than that of CADec when the distance between sentences requiring consistency gets larger."
],
[
"We conduct a human evaluation on random 700 examples from our general test set. We picked only examples where a DocRepair translation is not a full copy of the baseline one.",
"The annotators were provided an original group of sentences in English and two translations: baseline context-agnostic one and the one corrected by the DocRepair model. Translations were presented in random order with no indication which model they came from. The task is to pick one of the three options: (1) the first translation is better, (2) the second translation is better, (3) the translations are of equal quality. The annotators were asked to avoid the third answer if they are able to give preference to one of the translations. No other guidelines were given.",
"The results are provided in Table TABREF30. In about $52\\%$ of the cases annotators marked translations as having equal quality. Among the cases where one of the translations was marked better than the other, the DocRepair translation was marked better in $73\\%$ of the cases. This shows a strong preference of the annotators for corrected translations over the baseline ones."
],
[
"In this section, we discuss the influence of the training data chosen for document-level models. In all experiments, we used the DocRepair model."
],
[
"Table TABREF33 provides BLEU and consistency scores for the DocRepair model trained on different amount of data. We see that even when using a dataset of moderate size (e.g., 5m fragments) we can achieve performance comparable to the model trained on a large amount of data (30m fragments). Moreover, we notice that deixis scores are less sensitive to the amount of training data than lexical cohesion and ellipsis scores. The reason might be that, as we observed in our previous work BIBREF11, inconsistencies in translations due to the presence of deictic words and phrases are more frequent in this dataset than other types of inconsistencies. Also, as we show in Section SECREF7, this is the phenomenon the model learns faster in training."
],
[
"In this section, we discuss the limitations of using only monolingual data to model inconsistencies between sentence-level translations. In Section SECREF25 we observed a drop in performance on VP ellipsis for DocRepair compared to CADec, which was trained on parallel data. We hypothesized that this is due to the differences between one-way and round-trip translations, and now we test this hypothesis. To do so, we fix the dataset and vary the way in which the input for DocRepair is generated: round-trip or one-way translations. The latter assumes that document-level data is parallel, and translations are sampled from the source side of the sentences in a group rather than from their back-translations. For parallel data, we take 1.5m parallel instances which were used for CADec training and add 1m instances from our monolingual data. For segments in the parallel part, we either sample translations from the source side or use round-trip translations. The results are provided in Table TABREF35.",
"The model trained on one-way translations is slightly better than the one trained on round-trip translations. As expected, VP ellipsis is the hardest phenomena to be captured using round-trip translations, and the DocRepair model trained on one-way translated data gains 6% accuracy on this test set. This shows that the DocRepair model benefits from having access to non-synthetic English data. This results in exposing DocRepair at training time to Russian translations which suffer from the same inconsistencies as the ones it will have to correct at test time."
],
[
"Note that the scores of the DocRepair model trained on 2.5m instances randomly chosen from monolingual data (Table TABREF33) are different from the ones for the model trained on 2.5m instances combined from parallel and monolingual data (Table TABREF35). For convenience, we show these two in Table TABREF36.",
"The domain, the dataset these two data samples were gathered from, and the way we generated training data for DocRepair (round-trip translations) are all the same. The only difference lies in how the data was filtered. For parallel data, as in the previous work BIBREF6, we picked only sentence pairs with large relative time overlap of subtitle frames between source-language and target-language subtitles. This is necessary to ensure the quality of translation data: one needs groups of consecutive sentences in the target language where every sentence has a reliable translation.",
"Table TABREF36 shows that the quality of the model trained on data which came from the parallel part is worse than the one trained on monolingual data. This indicates that requiring each sentence in a group to have a reliable translation changes the distribution of the data, which might be not beneficial for translation quality and provides extra motivation for using monolingual data."
],
[
"Let us now look into how the process of DocRepair training progresses. Figure FIGREF38 shows how the BLEU scores with the reference translation and with the baseline context-agnostic translation (i.e. the input for the DocRepair model) are changing during training. First, the model quickly learns to copy baseline translations: the BLEU score with the baseline is very high. Then it gradually learns to change them, which leads to an improvement in BLEU with the reference translation and a drop in BLEU with the baseline. Importantly, the model is reluctant to make changes: the BLEU score between translations of the converged model and the baseline is 82.5. We count the number of changed sentences in every 4-sentence fragment in the test set and plot the histogram in Figure FIGREF38. In over than 20$\\%$ of the cases the model has not changed base translations at all. In almost $40\\%$, it modified only one sentence and left the remaining 3 sentences unchanged. The model changed more than half sentences in a group in only $14\\%$ of the cases. Several examples of the DocRepair translations are shown in Figure FIGREF43.",
"Figure FIGREF42 shows how consistency scores are changing in training. For deixis, the model achieves the final quality quite quickly; for the rest, it needs a large number of training steps to converge."
],
[
"Our work is most closely related to two lines of research: automatic post-editing (APE) and document-level machine translation."
],
[
"Our model can be regarded as an automatic post-editing system – a system designed to fix systematic MT errors that is decoupled from the main MT system. Automatic post-editing has a long history, including rule-based BIBREF17, statistical BIBREF18 and neural approaches BIBREF19, BIBREF20, BIBREF21.",
"In terms of architectures, modern approaches use neural sequence-to-sequence models, either multi-source architectures that consider both the original source and the baseline translation BIBREF19, BIBREF20, or monolingual repair systems, as in BIBREF21, which is concurrent work to ours. True post-editing datasets are typically small and expensive to create BIBREF22, hence synthetic training data has been created that uses original monolingual data as output for the sequence-to-sequence model, paired with an automatic back-translation BIBREF23 and/or round-trip translation as its input(s) BIBREF19, BIBREF21.",
"While previous work on automatic post-editing operated on the sentence level, the main novelty of this work is that our DocRepair model operates on groups of sentences and is thus able to fix consistency errors caused by the context-agnostic baseline MT system. We consider this strategy of sentence-level baseline translation and context-aware monolingual repair attractive when parallel document-level data is scarce.",
"For training, the DocRepair model only requires monolingual document-level data. While we create synthetic training data via round-trip translation similarly to earlier work BIBREF19, BIBREF21, note that we purposefully use sentence-level MT systems for this to create the types of consistency errors that we aim to fix with the context-aware DocRepair model. Not all types of consistency errors that we want to fix emerge from a round-trip translation, so access to parallel document-level data can be useful (Section SECREF34)."
],
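To make the synthetic-data construction above concrete, here is a minimal sketch of building (round-trip translated, original) training pairs with sentence-level systems, in the spirit of the monolingual repair setup; `to_source` and `to_target` stand in for arbitrary sentence-level MT systems and are assumptions, not the systems used in the paper.

```python
from typing import Callable, List, Tuple

def make_repair_examples(doc_sentences: List[str],
                         to_source: Callable[[str], str],
                         to_target: Callable[[str], str],
                         group_size: int = 4) -> List[Tuple[str, str]]:
    """Create (noisy input, clean target) pairs for a monolingual repair model.

    Each target-language sentence is translated to the source language and back
    *independently*, so inter-sentence consistency errors can appear in the input,
    while the original group of sentences is kept as the training target."""
    examples = []
    for start in range(0, len(doc_sentences) - group_size + 1, group_size):
        group = doc_sentences[start:start + group_size]
        round_trip = [to_target(to_source(s)) for s in group]  # sentence-level, context-agnostic
        examples.append((" ".join(round_trip), " ".join(group)))
    return examples
```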
[
"Neural models of MT that go beyond the sentence-level are an active research area BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF10, BIBREF9, BIBREF11. Typically, the main MT system is modified to take additional context as its input. One limitation of these approaches is that they assume that parallel document-level training data is available.",
"Closest to our work are two-pass models for document-level NMT BIBREF24, BIBREF11, where a second, context-aware model takes the translation and hidden representations of the sentence-level first-pass model as its input. The second-pass model can in principle be trained on a subset of the parallel training data BIBREF11, somewhat relaxing the assumption that all training data is at the document level.",
"Our work is different from this previous work in two main respects. Firstly, we show that consistency can be improved with only monolingual document-level training data. Secondly, the DocRepair model is decoupled from the first-pass MT system, which improves its portability."
],
[
"We introduce the first approach to context-aware machine translation using only monolingual document-level data. We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations. The model performs automatic post-editing on a sequence of sentence-level translations, refining translations of sentences in context of each other. Our approach results in substantial improvements in translation quality as measured by BLEU, targeted contrastive evaluation of several discourse phenomena and human evaluation. Moreover, we perform error analysis and detect which discourse phenomena are hard to capture using only monolingual document-level data. While in the current work we used text fragments of 4 sentences, in future work we would like to consider longer contexts."
],
[
"We would like to thank the anonymous reviewers for their comments. The authors also thank David Talbot and Yandex Machine Translation team for helpful discussions and inspiration. Ivan Titov acknowledges support of the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518). Rico Sennrich acknowledges support from the Swiss National Science Foundation (105212_169888), the European Union’s Horizon 2020 research and innovation programme (grant agreement no 825460), and the Royal Society (NAF\\R1\\180122)."
]
],
"section_name": [
"Introduction",
"Our Approach: Document-level Repair",
"Evaluation of Contextual Phenomena",
"Evaluation of Contextual Phenomena ::: Test sets",
"Evaluation of Contextual Phenomena ::: Phenomena overview",
"Experimental Setup ::: Data preprocessing",
"Experimental Setup ::: Models",
"Experimental Setup ::: Generating round-trip translations",
"Experimental Setup ::: Optimizer",
"Results ::: General results",
"Results ::: Consistency results",
"Results ::: Human evaluation",
"Varying Training Data",
"Varying Training Data ::: The amount of training data",
"Varying Training Data ::: One-way vs round-trip translations",
"Varying Training Data ::: Filtering: monolingual (no filtering) or parallel",
"Learning Dynamics",
"Related Work",
"Related Work ::: Automatic post-editing",
"Related Work ::: Document-level NMT",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"45de9d4b7b9c206ecfe1de1d4a17829202882778",
"f3eb40d885804d2cfc5f8e349aa32bc2d2f36956"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"11694d2341197fd0991057d1b21ae6e6d8afe48e",
"e1ea3780986cd887b377177afc03d60195e8c6f4"
],
"answer": [
{
"evidence": [
"We use the publicly available OpenSubtitles2018 corpus BIBREF12 for English and Russian. For a fair comparison with previous work, we train the baseline MT system on the data released by BIBREF11. Namely, our MT system is trained on 6m instances. These are sentence pairs with a relative time overlap of subtitle frames between source and target language subtitles of at least $0.9$."
],
"extractive_spans": [
" MT system on the data released by BIBREF11"
],
"free_form_answer": "",
"highlighted_evidence": [
" For a fair comparison with previous work, we train the baseline MT system on the data released by BIBREF11. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The baseline model, the model used for back-translation, and the DocRepair model are all Transformer base models BIBREF15. More precisely, the number of layers is $N=6$ with $h = 8$ parallel attention layers, or heads. The dimensionality of input and output is $d_{model} = 512$, and the inner-layer of a feed-forward networks has dimensionality $d_{ff}=2048$. We use regularization as described in BIBREF15.",
"As a second baseline, we use the two-pass CADec model BIBREF11. The first pass produces sentence-level translations. The second pass takes both the first-pass translation and representations of the context sentences as input and returns contextualized translations. CADec requires document-level parallel training data, while DocRepair only needs monolingual training data."
],
"extractive_spans": [
"Transformer base",
"two-pass CADec model"
],
"free_form_answer": "",
"highlighted_evidence": [
"The baseline model, the model used for back-translation, and the DocRepair model are all Transformer base models BIBREF15.",
"As a second baseline, we use the two-pass CADec model BIBREF11. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"894fb4a712ec964185f9c0b322c4e8c44eae7dc9"
],
"answer": [
{
"evidence": [
"We analyze which discourse phenomena are hard to capture using monolingual data only. Using contrastive test sets for targeted evaluation of several contextual phenomena, we compare the performance of the models trained on round-trip translations and genuine document-level parallel data. Among the four phenomena in the test sets we use (deixis, lexical cohesion, VP ellipsis and ellipsis which affects NP inflection) we find VP ellipsis to be the hardest phenomenon to be captured using round-trip translations."
],
"extractive_spans": [],
"free_form_answer": "Four discourse phenomena - deixis, lexical cohesion, VP ellipsis, and ellipsis which affects NP inflection.",
"highlighted_evidence": [
"We analyze which discourse phenomena are hard to capture using monolingual data only.",
"Among the four phenomena in the test sets we use (deixis, lexical cohesion, VP ellipsis and ellipsis which affects NP inflection) we find VP ellipsis to be the hardest phenomenon to be captured using round-trip translations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"e8b9857d9dab1e2f13521998e98fd1d649fd6f70"
],
"answer": [
{
"evidence": [
"The BLEU scores are provided in Table TABREF24 (we evaluate translations of 4-sentence fragments). To see which part of the improvement is due to fixing agreement between sentences rather than simply sentence-level post-editing, we train the same repair model at the sentence level. Each sentence in a group is now corrected separately, then they are put back together in a group. One can see that most of the improvement comes from accounting for extra-sentential dependencies. DocRepair outperforms the baseline and CADec by 0.7 BLEU, and its sentence-level repair version by 0.5 BLEU.",
"FLOAT SELECTED: Table 2: BLEU scores. For CADec, the original implementation was used."
],
"extractive_spans": [],
"free_form_answer": "On average 0.64 ",
"highlighted_evidence": [
"The BLEU scores are provided in Table TABREF24 (we evaluate translations of 4-sentence fragments).",
"FLOAT SELECTED: Table 2: BLEU scores. For CADec, the original implementation was used."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"how many humans evaluated the results?",
"what was the baseline?",
"what phenomena do they mention is hard to capture?",
"by how much did the BLEU score improve?"
],
"question_id": [
"627b8d7b5b985394428c974aca5ba0c1bbbba377",
"126ff22bfcc14a2f7e1a06a91ba7b646003e9cf0",
"7e4ef0a4debc048b244b61b4f7dc2518b5b466c0",
"b68f72aed961d5ba152e9dc50345e1e832196a76"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Training procedure of DocRepair. First, round-trip translations of individual sentences are produced to form an inconsistent text fragment (in the example, both genders of the speaker and the cat became inconsistent). Then, a repair model is trained to produce an original text from the inconsistent one.",
"Figure 2: The process of producing document-level translations at test time is two-step: (1) sentences are translated independently using a sentence-level model, (2) DocRepair model corrects translation of the resulting text fragment.",
"Table 1: Size of test sets: total number of test instances and with regard to the distance between sentences requiring consistency (in the number of sentences). For ellipsis, the two sets correspond to whether a model has to predict correct NP inflection, or correct verb sense (VP ellipsis).",
"Table 2: BLEU scores. For CADec, the original implementation was used.",
"Table 3: Results on contrastive test sets for specific contextual phenomena (deixis, lexical consistency, ellipsis (inflection), and VP ellipsis).",
"Table 4: Detailed accuracy on deixis and lexical cohesion test sets.",
"Table 5: Human evaluation results, comparing DocRepair with baseline.",
"Table 6: Results for DocRepair trained on different amount of data. For ellipsis, we show inflection/VP scores.",
"Table 7: Consistency scores for the DocRepair model trained on 2.5m instances, among which 1.5m are parallel instances. Compare round-trip and one-way translations of the parallel part.",
"Table 8: DocRepair trained on 2.5m instances, either randomly chosen from monolingual data or from the part where each utterance in a group has a translation.",
"Figure 4: (a) BLEU scores progression in training. BLEU evaluated with the target translations and with the context-agnostic baseline translations (which DocRepair learns to correct). (b) Distribution in the test set of the number of changed sentences in 4-sentence fragments.",
"Figure 5: Consistency scores progression in training."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"5-Table5-1.png",
"6-Table6-1.png",
"6-Table7-1.png",
"6-Table8-1.png",
"7-Figure4-1.png",
"7-Figure5-1.png"
]
} | [
"what phenomena do they mention is hard to capture?",
"by how much did the BLEU score improve?"
] | [
[
"1909.01383-Introduction-4"
],
[
"1909.01383-4-Table2-1.png",
"1909.01383-Results ::: General results-0"
]
] | [
"Four discourse phenomena - deixis, lexical cohesion, VP ellipsis, and ellipsis which affects NP inflection.",
"On average 0.64 "
] | 230 |
1705.05437 | A Biomedical Information Extraction Primer for NLP Researchers | Biomedical Information Extraction is an exciting field at the crossroads of Natural Language Processing, Biology and Medicine. It encompasses a variety of different tasks that require application of state-of-the-art NLP techniques, such as NER and Relation Extraction. This paper provides an overview of the problems in the field and discusses some of the techniques used for solving them. | {
"paragraphs": [
[
"The explosion of available scientific articles in the Biomedical domain has led to the rise of Biomedical Information Extraction (BioIE). BioIE systems aim to extract information from a wide spectrum of articles including medical literature, biological literature, electronic health records, etc. that can be used by clinicians and researchers in the field. Often the outputs of BioIE systems are used to assist in the creation of databases, or to suggest new paths for research. For example, a ranked list of interacting proteins that are extracted from biomedical literature, but are not present in existing databases, can allow researchers to make informed decisions about which protein/gene to study further. Interactions between drugs are necessary for clinicians who simultaneously administer multiple drugs to their patients. A database of diseases, treatments and tests is beneficial for doctors consulting in complicated medical cases.",
"The main problems in BioIE are similar to those in Information Extraction:",
"This paper discusses, in each section, various methods that have been adopted to solve the listed problems. Each section also highlights the difficulty of Information Extraction tasks in the biomedical domain.",
"This paper is intended as a primer to Biomedical Information Extraction for current NLP researchers. It aims to highlight the diversity of the various techniques from Information Extraction that have been applied in the Biomedical domain. The state of biomedical text mining is reviewed regularly. For more extensive surveys, consult BIBREF0 , BIBREF1 , BIBREF2 ."
],
[
"Named Entity Recognition (NER) in the Biomedical domain usually includes recognition of entities such as proteins, genes, diseases, treatments, drugs, etc. Fact extraction involves extraction of Named Entities from a corpus, usually given a certain ontology. When compared to NER in the domain of general text, the biomedical domain has some characteristic challenges:",
"Some of the earliest systems were heavily dependent on hand-crafted features. The method proposed in BIBREF4 for recognition of protein names in text does not require any prepared dictionary. The work gives examples of diversity in protein names and lists multiple rules depending on simple word features as well as POS tags.",
" BIBREF5 adopt a machine learning approach for NER. Their NER system extracts medical problems, tests and treatments from discharge summaries and progress notes. They use a semi-Conditional Random Field (semi-CRF) BIBREF6 to output labels over all tokens in the sentence. They use a variety of token, context and sentence level features. They also use some concept mapping features using existing annotation tools, as well as Brown clustering to form 128 clusters over the unlabelled data. The dataset used is the i2b2 2010 challenge dataset. Their system achieves an F-Score of 0.85. BIBREF7 is an incremental paper on NER taggers. It uses 3 types of word-representation techniques (Brown clustering, distributional clustering, word vectors) to improve performance of the NER Conditional Random Field tagger, and achieves marginal F-Score improvements.",
" BIBREF8 propose a boostrapping mechanism to bootstrap biomedical ontologies using NELL BIBREF9 , which uses a coupled semi-supervised bootstrapping approach to extract facts from text, given an ontology and a small number of “seed” examples for each category. This interesting approach (called BioNELL) uses an ontology of over 100 categories. In contrast to NELL, BioNELL does not contain any relations in the ontology. BioNELL is motivated by the fact that a lot of scientific literature available online is highly reliable due to peer-review. The authors note that the algorithm used by NELL to bootstrap fails in BioNELL due to ambiguities in biomedical literature, and heavy semantic drift. One of the causes for this is that often common words such as “white”, “dad”, “arm” are used as names of genes- this can easily result in semantic drift in one iteration of the bootstrapping. In order to mitigate this, they use Pointwise Mutual Information scores for corpus level statistics, which attributes a small score to common words. In addition, in contrast to NELL, BioNELL only uses high instances as seeds in the next iteration, but adds low ranking instances to the knowledge base. Since evaluation is not possible using Mechanical Turk or a small number of experts (due to the complexity of the task), they use Freebase BIBREF10 , a knowledge base that has some biomedical concepts as well. The lexicon learned using BioNELL is used to train an NER system. The system shows a very high precision, thereby showing that BioNELL learns very few ambiguous terms.",
"More recently, deep learning techniques have been developed to further enhance the performance of NER systems. BIBREF11 explore recurrent neural networks for the problem of NER in biomedical text."
],
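As an illustration of the kind of hand-crafted token and context features used by the CRF-style NER taggers discussed above (word shape, affixes, Greek-letter indicators, neighbouring words), here is a minimal, hypothetical feature-extraction sketch; it is not the feature set of any specific cited system.

```python
def token_features(tokens, i):
    """Hand-crafted features for token i, in the spirit of (semi-)CRF NER taggers."""
    word = tokens[i]
    feats = {
        "word.lower": word.lower(),
        "word.isupper": word.isupper(),
        "word.istitle": word.istitle(),
        "word.isdigit": word.isdigit(),
        "prefix3": word[:3],
        "suffix3": word[-3:],
        "has_hyphen": "-" in word,
        # Greek letters frequently appear in gene/protein names (e.g. TNF-alpha written with a Greek letter).
        "has_greek": any(c in "αβγδεκλ" for c in word),
    }
    # Simple context features: the immediately preceding and following tokens.
    feats["prev.lower"] = tokens[i - 1].lower() if i > 0 else "<BOS>"
    feats["next.lower"] = tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>"
    return feats

# Example sentence mentioning a protein:
sentence = "IL-2 activates STAT5 in T cells".split()
print(token_features(sentence, 0))
```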
[
"In Biomedical Information Extraction, Relation Extraction involves finding related entities of many different kinds. Some of these include protein-protein interactions, disease-gene relations and drug-drug interactions. Due to the explosion of available biomedical literature, it is impossible for one person to extract relevant relations from published material. Automatic extraction of relations assists in the process of database creation, by suggesting potentially related entities with links to the source article. For example, a database of drug-drug interactions is important for clinicians who administer multiple drugs simultaneously to their patients- it is imperative to know if one drug will have an adverse effect on the other. A variety of methods have been developed for relation extractions, and are often inspired by Relation Extraction in NLP tasks. These include rule-based approaches, hand-crafted patterns, feature-based and kernel machine learning methods, and more recently deep learning architectures. Relation Extraction systems over Biomedical Corpora are often affected by noisy extraction of entities, due to ambiguities in names of proteins, genes, drugs etc.",
" BIBREF12 was one of the first large scale Information Extraction efforts to study the feasibility of extraction of protein-protein interactions (such as “protein A activates protein B\") from Biomedical text. Using 8 hand-crafted regular expressions over a fixed vocabulary, the authors were able to achieve a recall of 30% for interactions present in The Dictionary of Interacting Proteins (DIP) from abstracts in Medline. The method did not differentiate between the type of relation. The reasons for the low recall were the inconsistency in protein nomenclature, information not present in the abstract, and due to specificity of the hand-crafted patterns. On a small subset of extracted relations, they found that about 60% were true interactions between proteins not present in DIP.",
" BIBREF13 combine sentence level relation extraction for protein interactions with corpus level statistics. Similar to BIBREF12 , they do not consider the type of interaction between proteins- only whether they interact in the general sense of the word. They also do not differentiate between genes and their protein products (which may share the same name). They use Pointwise Mutual Information (PMI) for corpus level statistics to determine whether a pair of proteins occur together by chance or because they interact. They combine this with a confidence aggregator that takes the maximum of the confidence of the extractor over all extractions for the same protein-pair. The extraction uses a subsequence kernel based on BIBREF14 . The integrated model, that combines PMI with aggregate confidence, gives the best performance. Kernel methods have widely been studied for Relation Extraction in Biomedical Literature. Common kernels used usually exploit linguistic information by utilising kernels based on the dependency tree BIBREF15 , BIBREF16 , BIBREF17 .",
" BIBREF18 look at the extraction of diseases and their relevant genes. They use a dictionary from six public databases to annotate genes and diseases in Medline abstracts. In their work, the authors note that when both genes and diseases are correctly identified, they are related in 94% of the cases. The problem then reduces to filtering incorrect matches using the dictionary, which occurs due to false positives resulting from ambiguities in the names as well as ambiguities in abbreviations. To this end, they train a Max-Ent based NER classifier for the task, and get a 26% gain in precision over the unfiltered baseline, with a slight hit in recall. They use POS tags, expanded forms of abbreviations, indicators for Greek letters as well as suffixes and prefixes commonly used in biomedical terms.",
" BIBREF19 adopt a supervised feature-based approach for the extraction of drug-drug interaction (DDI) for the DDI-2013 dataset BIBREF20 . They partition the data in subsets depending on the syntactic features, and train a different model for each. They use lexical, syntactic and verb based features on top of shallow parse features, in addition to a hand-crafted list of trigger words to define their features. An SVM classifier is then trained on the feature vectors, with a positive label if the drug pair interacts, and negative otherwise. Their method beats other systems on the DDI-2013 dataset. Some other feature-based approaches are described in BIBREF21 , BIBREF22 .",
"Distant supervision methods have also been applied to relation extraction over biomedical corpora. In BIBREF23 , 10,000 neuroscience articles are distantly supervised using information from UMLS Semantic Network to classify brain-gene relations into geneExpression and otherRelation. They use lexical (bag of words, contextual) features as well as syntactic (dependency parse features). They make the “at-least one” assumption, i.e. at least one of the sentences extracted for a given entity-pair contains the relation in database. They model it as a multi-instance learning problem and adopt a graphical model similar to BIBREF24 . They test using manually annotated examples. They note that the F-score achieved are much lesser than that achieved in the general domain in BIBREF24 , and attribute to generally poorer performance of NER tools in the biomedical domain, as well as less training examples. BIBREF25 explore distant supervision methods for protein-protein interaction extraction.",
"More recently, deep learning methods have been applied to relation extraction in the biomedical domain. One of the main advantages of such methods over traditional feature or kernel based learning methods is that they require minimal feature engineering. In BIBREF26 , skip-gram vectors BIBREF27 are trained over 5.6Gb of unlabelled text. They use these vectors to extract protein-protein interactions by converting them into features for entities, context and the entire sentence. Using an SVM for classification, their method is able to outperform many kernel and feature based methods over a variety of datasets.",
" BIBREF28 follow a similar method by using word vectors trained on PubMed articles. They use it for the task of relation extraction from clinical text for entities that include problem, treatment and medical test. For a given sentence, given labelled entities, they predict the type of relation exhibited (or None) by the entity pair. These types include “treatment caused medical problem”, “test conducted to investigate medical problem”, “medical problem indicates medical problems”, etc. They use a Convolutional Neural Network (CNN) followed by feedforward neural network architecture for prediction. In addition to pre-trained word vectors as features, for each token they also add features for POS tags, distance from both the entities in the sentence, as well BIO tags for the entities. Their model performs better than a feature based SVM baseline that they train themselves.",
"The BioNLP'16 Shared Tasks has also introduced some Relation Extraction tasks, in particular the BB3-event subtask that involves predicting whether a “lives-in” relation holds for a Bacteria in a location. Some of the top performing models for this task are deep learning models. BIBREF29 train word embeddings with six billions words of scientific texts from PubMed. They then consider the shortest dependency path between the two entities (Bacteria and location). For each token in the path, they use word embedding features, POS type embeddings and dependency type embeddings. They train a unidirectional LSTM BIBREF30 over the dependency path, that achieves an F-Score of 52.1% on the test set.",
" BIBREF31 improve the performance by making modifications to the above model. Instead of using the shortest dependency path, they modify the parse tree based on some pruning strategies. They also add feature embeddings for each token to represent the distance from the entities in the shortest path. They then train a Bidirectional LSTM on the path, and obtain an F-Score of 57.1%.",
"The recent success of deep learning models in Biomedical Relation Extraction that require minimal feature engineering is promising. This also suggests new avenues of research in the field. An approach as in BIBREF32 can be used to combine multi-instance learning and distant supervision with a neural architecture."
],
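The corpus-level Pointwise Mutual Information statistic mentioned above (used to decide whether two proteins co-occur more often than chance) can be made concrete with a small worked example; the counts below are invented purely for illustration.

```python
import math

def pmi(pair_count, count_a, count_b, total_sentences):
    """PMI(a, b) = log[ p(a, b) / (p(a) * p(b)) ], estimated from co-occurrence counts."""
    p_ab = pair_count / total_sentences
    p_a = count_a / total_sentences
    p_b = count_b / total_sentences
    return math.log(p_ab / (p_a * p_b))

# Hypothetical counts over a corpus of 1,000,000 sentences.
# A protein pair that co-occurs far more often than independence predicts -> high PMI:
print(pmi(pair_count=120, count_a=900, count_b=1500, total_sentences=1_000_000))    # ~4.5
# A frequent, ambiguous term co-occurring at roughly chance level -> PMI near 0:
print(pmi(pair_count=45, count_a=30_000, count_b=1500, total_sentences=1_000_000))  # ~0.0
```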
[
"Event Extraction in the Biomedical domain is a task that has gained more importance recently. Event Extraction goes beyond Relation Extraction. In Biomedical Event Extraction, events generally refer to a change in the state of biological molecules such as proteins and DNA. Generally, it includes detection of targeted event types such as gene expression, regulation, localisation and transcription. Each event type in addition can have multiple arguments that need to be detected. An additional layer of complexity comes from the fact that events can also be arguments of other events, giving rise to a nested structure. This helps to capture the underlying biology better BIBREF1 . Detecting the event type often involves recognising and classifying trigger words. Often, these words are verbs such as “activates”, “inhibits”, “phosphorylation” that may indicate a single, or sometimes multiple event types. In this section, we will discuss some of the successful models for Event Extraction in some detail.",
"Event Extraction gained a lot of interest with the availability of an annotated corpus with the BioNLP'09 Shared Task on Event Extraction BIBREF34 . The task involves prediction of trigger words over nine event types such as expression, transcription, catabolism, binding, etc. given only annotation of named entities (proteins, genes, etc.). For each event, its class, trigger expression and arguments need to be extracted. Since the events can be arguments to other events, the final output in general is a graph representation with events and named entities as nodes, and edges that correspond to event arguments. BIBREF33 present a pipeline based method that is heavily dependent on dependency parsing. Their pipeline approach consists of three steps: trigger detection, argument detection and semantic post-processing. While the first two components are learning based systems, the last component is a rule based system. For the BioNLP'09 corpus, only 5% of the events span multiple sentences. Hence the approach does not get affected severely by considering only single sentences. It is important to note that trigger words cannot simply be reduced to a dictionary lookup. This is because a specific word may belong to multiple classes, or may not always be a trigger word for an event. For example, “activate” is found to not be a trigger word in over 70% of the cases. A multi-class SVM is trained for trigger detection on each token, using a large feature set consisting of semantic and syntactic features. It is interesting to note that the hyperparameters of this classifier are optimised based on the performance of the entire end-to-end system.",
"For the second component to detect arguments, labels for edges between entities must be predicted. For the BioNLP'09 Shared Task, each directed edge from one event node to another event node, or from an event node to a named entity node are classified as “theme”, “cause”, or None. The second component of the pipeline makes these predictions independently. This is also trained using a multi-class SVM which involves heavy use of syntactic features, including the shortest dependency path between the nodes. The authors note that the precision-recall choice of the first component affects the performance of the second component: since the second component is only trained on Gold examples, any error by the first component will lead to a cascading of errors. The final component, which is a semantic post-processing step, consists of rules and heuristics to correct the output of the second component. Since the edge predictions are made independently, it is possible that some event nodes do not have any edges, or have an improper combination of edges. The rule based component corrects these and applies rules to break directed cycles in the graph, and some specific heuristics for different types of events. The final model gives a cumulative F-Score of 52% on the test set, and was the best model on the task.",
" BIBREF35 note that previous approaches on the task suffer due to the pipeline nature and the propagation of errors. To counter this, they adopt a joint inference method based on Markov Logic Networks BIBREF36 for the same task on BioNLP'09. The Markov Logic Network jointly predicts whether each token is a trigger word, and if yes, the class it belongs to; for each dependency edge, whether it is an argument path leading to a “theme” or a “cause”. By formulating the Event Extraction problem using an MLN, the approach becomes computationally feasible and only linear in the length of the sentence. They incorporate hard constraints to encode rules such as “an argument path must have an event”, “a cause path must start with a regulation event”, etc. In addition, they also include some domain specific soft constraints as well as some linguistically-motivated context-specific soft constraints. In order to train the MLN, stochastic gradient descent was used. Certain heuristic methods are implemented in order to deal with errors due to syntactic parsing, especially ambiguities in PP-attachment and coordination. Their final system is competitive and comes very close to the system by BIBREF33 with an average F-Score of 50%. To further improve the system, they suggest leveraging additional joint-inference opportunities and integrating the syntactic parser better. Some other more recent models for Biomedical Event Extraction include BIBREF37 , BIBREF38 ."
],
[
"We have discussed some of the major problems and challenges in BioIE, and seen some of the diverse approaches adopted to solve them. Some interesting problems such as Pathway Extraction for Biological Systems BIBREF39 , BIBREF40 have not been discussed.",
"Biomedical Information Extraction is a challenging and exciting field for NLP researchers that demands application of state-of-the-art methods. Traditionally, there has been a dependence on hand-crafted features or heavily feature-engineered methods. However, with the advent of deep learning methods, a lot of BioIE tasks are seeing an improvement by adopting deep learning models such as Convolutional Neural Networks and LSTMs, which require minimal feature engineering. Rapid progress in developing better systems for BioIE will be extremely helpful for clinicians and researchers in the Biomedical domain."
]
],
"section_name": [
"Introduction",
"Named Entity Recognition and Fact Extraction",
"Relation Extraction",
"Event Extraction",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"11c8c085055a57725aec482ac9b73ac028b75ef3",
"59a6c5c26c7d80b26bf7e5132500c091aea8d4c8"
],
"answer": [
{
"evidence": [
"Named Entity Recognition (NER) in the Biomedical domain usually includes recognition of entities such as proteins, genes, diseases, treatments, drugs, etc. Fact extraction involves extraction of Named Entities from a corpus, usually given a certain ontology. When compared to NER in the domain of general text, the biomedical domain has some characteristic challenges:"
],
"extractive_spans": [
"Named Entity Recognition"
],
"free_form_answer": "",
"highlighted_evidence": [
"Named Entity Recognition (NER) in the Biomedical domain usually includes recognition of entities such as proteins, genes, diseases, treatments, drugs, etc. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Named Entity Recognition (NER) in the Biomedical domain usually includes recognition of entities such as proteins, genes, diseases, treatments, drugs, etc. Fact extraction involves extraction of Named Entities from a corpus, usually given a certain ontology. When compared to NER in the domain of general text, the biomedical domain has some characteristic challenges:"
],
"extractive_spans": [],
"free_form_answer": "Named Entity Recognition, including entities such as proteins, genes, diseases, treatments, drugs, etc. in the biomedical domain",
"highlighted_evidence": [
"Named Entity Recognition (NER) in the Biomedical domain usually includes recognition of entities such as proteins, genes, diseases, treatments, drugs, etc."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"f320efb1fbb744616e420aaf8da0f9622b75b2ed",
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
},
{
"annotation_id": [
"482b34cc6d8ca0cec7ba3e21662c46863c31bd6d"
],
"answer": [
{
"evidence": [
"BIBREF5 adopt a machine learning approach for NER. Their NER system extracts medical problems, tests and treatments from discharge summaries and progress notes. They use a semi-Conditional Random Field (semi-CRF) BIBREF6 to output labels over all tokens in the sentence. They use a variety of token, context and sentence level features. They also use some concept mapping features using existing annotation tools, as well as Brown clustering to form 128 clusters over the unlabelled data. The dataset used is the i2b2 2010 challenge dataset. Their system achieves an F-Score of 0.85. BIBREF7 is an incremental paper on NER taggers. It uses 3 types of word-representation techniques (Brown clustering, distributional clustering, word vectors) to improve performance of the NER Conditional Random Field tagger, and achieves marginal F-Score improvements."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"BIBREF5 adopt a machine learning approach for NER. Their NER system extracts medical problems, tests and treatments from discharge summaries and progress notes.",
"The dataset used is the i2b2 2010 challenge dataset. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
}
],
"nlp_background": [
"infinity",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What is NER?",
"Does the paper explore extraction from electronic health records?"
],
"question_id": [
"cf874cd9023d901e10aa8664b813d32501e7e4d2",
"42084c41343e5a6ae58a22e5bfc5ce987b5173de"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"information extraction"
],
"topic_background": [
"research",
"familiar"
]
} | {
"caption": [
"Figure 1: An example of an input sentence with annotations, and expected output of the event extraction system (Borrowed from (Björne et al., 2009))"
],
"file": [
"4-Figure1-1.png"
]
} | [
"What is NER?"
] | [
[
"1705.05437-Named Entity Recognition and Fact Extraction-0"
]
] | [
"Named Entity Recognition, including entities such as proteins, genes, diseases, treatments, drugs, etc. in the biomedical domain"
] | 231 |
2001.08868 | Exploration Based Language Learning for Text-Based Games | This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games. Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text. These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents. Moreover, they provide a learning environment in which these skills can be acquired through interactions with an environment rather than using fixed corpora. One aspect that makes these games particularly challenging for learning agents is the combinatorially large action space. Existing methods for solving text-based games are limited to games that are either very simple or have an action space restricted to a predetermined set of admissible actions. In this work, we propose to use the exploration approach of Go-Explore for solving text-based games. More specifically, in an initial exploration phase, we first extract trajectories with high rewards, after which we train a policy to solve the game by imitating these trajectories. Our experiments show that this approach outperforms existing solutions in solving text-based games, and it is more sample efficient in terms of the number of interactions with the environment. Moreover, we show that the learned policy can generalize better than existing solutions to unseen games without using any restriction on the action space. | {
"paragraphs": [
[
"Text-based games became popular in the mid 80s with the game series Zork BIBREF1 resulting in many different text-based games being produced and published BIBREF2. These games use a plain text description of the environment and the player has to interact with them by writing natural-language commands. Recently, there has been a growing interest in developing agents that can automatically solve text-based games BIBREF3 by interacting with them. These settings challenge the ability of an artificial agent to understand natural language, common sense knowledge, and to develop the ability to interact with environments using language BIBREF4, BIBREF5.",
"Since the actions in these games are commands that are in natural language form, the major obstacle is the extremely large action space of the agent, which leads to a combinatorially large exploration problem. In fact, with a vocabulary of $N$ words (e.g. 20K) and the possibility of producing sentences with at most $m$ words (e.g. 7 words), the total number of actions is $O(N^m)$ (e.g. 20K$^7 \\approx 1.28 e^{30}$). To avoid this large action space, several existing solutions focus on simpler text-based games with very small vocabularies where the action space is constrained to verb-object pairs BIBREF6, BIBREF7, BIBREF8, BIBREF9. Moreover, many existing works rely on using predetermined sets of admissible actions BIBREF10, BIBREF11, BIBREF12. However, a more ideal, and still under explored, alternative would be an agent that can operate in the full, unconstrained action space of natural language that can systematically generalize to new text-based games with no or few interactions with the environment.",
"To address this challenge, we propose to use the idea behind the recently proposed Go-Explore BIBREF0 algorithm. Specifically, we propose to first extract high reward trajectories of states and actions in the game using the exploration methodology proposed in Go-Explore and then train a policy using a Seq2Seq BIBREF13 model that maps observations to actions, in an imitation learning fashion. To show the effectiveness of our proposed methodology, we first benchmark the exploration ability of Go-Explore on the family of text-based games called CoinCollector BIBREF8. Then we use the 4,440 games of “First TextWorld Problems” BIBREF14, which are generated using the machinery introduced by BIBREF3, to show the generalization ability of our proposed methodology. In the former experiment we show that Go-Explore finds winning trajectories faster than existing solutions, and in the latter, we show that training a Seq2Seq model on the trajectories found by Go-Explore results in stronger generalization, as suggested by the stronger performance on unseen games, compared to existing competitive baselines BIBREF10, BIBREF7."
],
[
"Among reinforcement learning based efforts to solve text-based games two approaches are prominent. The first approach assumes an action as a sentence of a fixed number of words, and associates a separate $Q$-function BIBREF15, BIBREF16 with each word position in this sentence. This method was demonstrated with two-word sentences consisting of a verb-object pair (e.g. take apple) BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF17. In the second approach, one $Q$-function that scores all possible actions (i.e. sentences) is learned and used to play the game BIBREF10, BIBREF11, BIBREF12. The first approach is quite limiting since a fixed number of words must be selected in advance and no temporal dependency is enforced between words (e.g. lack of language modelling). In the second approach, on the other hand, the number of possible actions can become exponentially large if the admissible actions (a predetermined low cardinality set of actions that the agent can take) are not provided to the agent. A possible solution to this issue has been proposed by BIBREF18, where a hierarchical pointer-generator is used to first produce the set of admissible actions given the observation, and subsequently one element of this set is chosen as the action for that observation. However, in our experiments we show that even in settings where the true set of admissible actions is provided by the environment, a $Q$-scorer BIBREF10 does not generalize well in our setting (Section 5.2 Zero-Shot) and we would expect performance to degrade even further if the admissible actions were generated by a separate model. Less common are models that either learn to reduce a large set of actions into a smaller set of admissible actions by eliminating actions BIBREF12 or by compressing them in a latent space BIBREF11."
],
[
"In most text-based games rewards are sparse, since the size of the action space makes the probability of observing a reward extremely low when taking only random actions. Sparse reward environments are particularly challenging for reinforcement learning as they require longer term planning. Many exploration based solutions have been proposed to address the challenges associated with reward sparsity. Among these exploration approaches are novelty search BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, intrinsic motivation BIBREF24, BIBREF25, BIBREF26, and curiosity based rewards BIBREF27, BIBREF28, BIBREF29. For text based games exploration methods have been studied by BIBREF8, where the authors showed the effectiveness of the episodic discovery bonus BIBREF30 in environments with sparse rewards. This exploration method can only be applied in games with very small action and state spaces, since their counting methods rely on the state in its explicit raw form."
],
[
"Go-Explore BIBREF0 differs from the exploration-based algorithms discussed above in that it explicitly keeps track of under-explored areas of the state space and in that it utilizes the determinism of the simulator in order to return to those states, allowing it to explore sparse-reward environments in a sample efficient way (see BIBREF0 as well as section SECREF27). For the experiments in this paper we mainly focus on the final performance of our policy, not how that policy is trained, thus making Go-Explore a suitable algorithm for our experiments. Go-Explore is composed of two phases. In phase 1 (also referred to as the “exploration” phase) the algorithm explores the state space through keeping track of previously visited states by maintaining an archive. During this phase, instead of resuming the exploration from scratch, the algorithm starts exploring from promising states in the archive to find high performing trajectories. In phase 2 (also referred to as the “robustification” phase, while in our variant we will call it “generalization”) the algorithm trains a policy using the trajectories found in phase 1. Following this framework, which is also shown in Figure FIGREF56 (Appendix A.2), we define the Go-Explore phases for text-based games.",
"Let us first define text-based games using the same notation as BIBREF8. A text-based game can be framed as a discrete-time Partially Observable Markov Decision Process (POMDP) BIBREF31 defined by $(S, T, A, \\Omega , O, R)$, where: $S$ is the set of the environment states, $T$ is the state transition function that defines the next state probability, i.e. $T(s_{t+1}|a_t;s_t) \\forall s_t\\in S$, $A$ is the set of actions, which in our case is all the possible sequences of tokens, $\\Omega $ is the set of observations, i.e. text observed by the agent every time has it to take an action in the game (i.e. dialogue turn) which is controlled by the conditional observation probability $O$, i.e. $O(o_t|s_t, a_{t-1})$, and, finally, $R$ is the reward function i.e. $r=R(s,a)$.",
"Let us also define the observation $o_t \\in \\Omega $ and the action $a_t \\in A$. Text-based games provide some information in plain text at each turn and, without loss of generality, we define an observation $o_t$ as the sequence of tokens $\\lbrace o_t^0,\\cdots ,o_t^n\\rbrace $ that form that text. Similarly, we define the tokens of an action $a_t$ as the sequence $\\lbrace a_t^0,\\cdots ,a_t^m\\rbrace $. Furthermore, we define the set of admissible actions $\\mathcal {A}_t \\in A$ as $\\mathcal {A}_t=\\lbrace a_0,\\cdots ,a_z\\rbrace $, where each $a_i$, which is a sequence of tokens, is grammatically correct and admissible with reference to the observation $o_t$."
],
[
"In phase 1, Go-Explore builds an archive of cells, where a cell is defined as a set of observations that are mapped to the same, discrete representation by some mapping function $f(x)$. Each cell is associated with meta-data including the trajectory towards that cell, the length of that trajectory, and the cumulative reward of that trajectory. New cells are added to the archive when they are encountered in the environment, and existing cells are updated with new meta-data when the trajectory towards that cells is higher scoring or equal scoring but shorter.",
"At each iteration the algorithm selects a cell from this archive based on meta-data of the cell (e.g. the accumulated reward, etc.) and starts to randomly explore from the end of the trajectory associated with the selected cell. Phase 1 requires three components: the way that observations are embedded into cell representations, the cell selection, and the way actions are randomly selected when exploring from a selected cell. In our variant of the algorithm, $f(x)$ is defined as follows: given an observation, we compute the word embedding for each token in this observation, sum these embeddings, and then concatenate this sum with the current cumulative reward to construct the cell representation. The resulting vectors are subsequently compressed and discretized by binning them in order to map similar observations to the same cell. This way, the cell representation, which is the key of the archive, incorporates information about the current observation of the game. Adding the current cumulative reward to the cell representation is new to our Go-Explore variant as in the original algorithm only down-scaled image pixels were used. It turned out to be a very very effective to increase the speed at which high reward trajectories are discovered. In phase 1, we restrict the action space to the set of admissible actions $\\mathcal {A}_t$ that are provided by the game at every step of the game . This too is particularly important for the random search to find a high reward trajectory faster. Finally, we denote the trajectory found in phase 1 for game $g$ as $\\mathcal {T}_g= [(o_0, a_0, r_0), \\cdots , (o_t,a_t,r_t)]$."
],
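A minimal sketch of the cell construction and archive update described above: observation tokens are embedded, summed, concatenated with the cumulative reward, and binned into a hashable key. The bin size and the `embed` lookup (e.g. a 50-dimensional GloVe table with zeros for out-of-vocabulary tokens) are illustrative assumptions, not the exact values used in the paper.

```python
import numpy as np

def cell_key(observation_tokens, cumulative_reward, embed, bin_size=0.5):
    """Map a room-description observation plus the cumulative reward to a discrete cell key."""
    summed = np.sum([embed(tok) for tok in observation_tokens], axis=0)
    rep = np.concatenate([summed, [float(cumulative_reward)]])
    binned = np.round(rep / bin_size).astype(int)  # discretize so similar observations collide
    return tuple(binned.tolist())

def update_archive(archive, key, trajectory, score):
    """Keep, per cell, the highest-scoring trajectory (and, on ties, the shortest one)."""
    best = archive.get(key)
    if (best is None
            or score > best["score"]
            or (score == best["score"] and len(trajectory) < len(best["trajectory"]))):
        archive[key] = {"trajectory": list(trajectory), "score": score}
```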
[
"Phase 2 of Go-Explore uses the trajectories found in phase 1 and trains a policy based on those trajectories. The goal of this phase in the original Go-Explore algorithm is to turn the fragile policy of playing a trajectory of actions in sequence into a more robust, state-conditioned policy that can thus deal with environmental stochasticity. In our variant of the algorithm the purpose of the second phase is generalization: although in our environment there is no stochasticity, our goal is to learn a general policy that can be applied across different games and generalize to unseen games. In the original Go-Explore implementation, the authors used the backward Proximal Policy Optimization algorithm (PPO) BIBREF32, BIBREF33 to train this policy. In this work we opt for a simple but effective Seq2Seq imitation learning approach that does not use the reward directly in the loss. More specifically, given the trajectory $\\mathcal {T}_g$, we train a Seq2Seq model to minimize the negative log-likelihood of the action $a_t$ given the observation $o_t$. In other words, consider a word embedding matrix $E \\in \\mathbb {R}^{d \\times |V|}$ where $d$ is the embedding size and $|V|$ is the cardinality of the vocabulary, which maps the input token to an embedded vector. Then, we define an encoder LSTM$_{enc}$ and a decoder LSTM$_{dec}$. Every token $o_t$ from the trajectory $\\mathcal {T}_g$ is converted to its embedded representation using the embedding matrix $E$ and the sequence of these embedding vectors is passed through LSTM$_{enc}$:",
"The last hidden state $h_{|o_t|}^{enc}$ is used as the initial hidden state of the decoder which generates the action $a_t$ token by token. Specifically, given the sequence of hidden states $H \\in \\mathbb {R}^{d \\times |o_t|}$ of the encoder, tokens $a_t^j$ are generated as follows:",
"where $W \\in \\mathbb {R}^{2d \\times |V|}$ is a matrix that maps the decoder hidden state, concatenated with the context vector, into a vocabulary size vector. During training, the parameters of the model are trained by minimizing:",
"which is the sum of the negative log likelihood of each token in $a_t$ (using teacher forcing BIBREF34). However, at test time the model produces the sequence in an auto-regressive way BIBREF35 using greedy search."
],
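A compact PyTorch sketch of the imitation objective above: an encoder LSTM over the observation, a decoder LSTM with dot-product attention over the encoder states, a linear layer over the concatenated decoder state and context vector, and a per-token cross-entropy (negative log-likelihood) loss with teacher forcing. Dimensions, the attention variant, and padding handling are illustrative assumptions rather than a faithful reimplementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq2SeqPolicy(nn.Module):
    def __init__(self, vocab_size, d=128, pad_id=0):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d, padding_idx=pad_id)
        self.encoder = nn.LSTM(d, d, batch_first=True)
        self.decoder = nn.LSTM(d, d, batch_first=True)
        self.out = nn.Linear(2 * d, vocab_size)  # maps [decoder state; context] to vocab logits

    def forward(self, obs_ids, act_in_ids):
        # Encode the observation o_t.
        H, (h, c) = self.encoder(self.embed(obs_ids))             # H: (B, |o|, d)
        # Decode the action with teacher forcing (act_in_ids is the gold action shifted right).
        D, _ = self.decoder(self.embed(act_in_ids), (h, c))       # D: (B, |a|, d)
        # Dot-product attention over encoder states gives a context vector per decoding step.
        attn = torch.softmax(torch.bmm(D, H.transpose(1, 2)), dim=-1)   # (B, |a|, |o|)
        context = torch.bmm(attn, H)                                     # (B, |a|, d)
        return self.out(torch.cat([D, context], dim=-1))                 # (B, |a|, |V|)

def imitation_loss(model, obs_ids, act_in_ids, act_out_ids, pad_id=0):
    """Token-level negative log-likelihood of the gold action given the observation
    (averaged over non-pad target tokens)."""
    logits = model(obs_ids, act_in_ids)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           act_out_ids.reshape(-1), ignore_index=pad_id)
```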
[
"A set of commonly used standard benchmarks BIBREF8, BIBREF3, BIBREF7 for agents that play text-based games are simple games which require no more than two words in each step to solve the game, and have a very limited number of admissible actions per observation. While simplifying, this setting limits the agent's ability to fully express natural language and learn more complex ways to speak. In this paper, we embrace more challenging environments where multiple words are needed at each step to solve the games and the reward is particularly sparse. Hence, we have selected the following environments:",
"[leftmargin=*]",
"CoinCollector BIBREF8 is a class of text-based games where the objective is to find and collect a coin from a specific location in a given set of connected rooms . The agent wins the game after it collects the coin, at which point (for the first and only time) a reward of +1 is received by the agent. The environment parses only five admissible commands (go north, go east, go south, go west, and take coin) made by two worlds;",
"CookingWorld BIBREF14 in this challenge, there are 4,440 games with 222 different levels of difficulty, with 20 games per level of difficulty, each with different entities and maps. The goal of each game is to cook and eat food from a given recipe, which includes the task of collecting ingredients (e.g. tomato, potato, etc.), objects (e.g. knife), and processing them according to the recipe (e.g. cook potato, slice tomato, etc.). The parser of each game accepts 18 verbs and 51 entities with a predefined grammar, but the overall size of the vocabulary of the observations is 20,000. In Appendix SECREF36 we provide more details about the levels and the games' grammar.",
"In our experiments, we try to address two major research questions. First, we want to benchmark the exploration power of phase 1 of Go-Explore in comparison to existing exploration approaches used in text-based games. For this purpose, we generate 10 CoinCollector games with the hardest setting used by BIBREF8, i.e. hard-level 30 (refer to the Appendix SECREF36 for more information) and use them as a benchmark. In fact, CoinCollector requires many actions (at least 30 on hard games) to find a reward, which makes it suitable for testing the exploration capabilities of different algorithms. Secondly, we want to verify the generalization ability of our model in creating complex strategies using natural language. CoinCollector has a very limited action space, and is mainly designed to benchmark models on their capability of dealing with sparse rewards. Therefore we use the more complex CookingWorld games to evaluate the generalization capabilities of our proposed approach. We design three different settings for CookingWorld: 1) Single: treat each game independently, which means we train and test one agent for each game to evaluate how robust different models are across different games.; 2) Joint: training and testing a single policy on all the 4,440 CookingWorld games at the same time to verify that models can learn to play multiple games at the same time; 3) Zero-Shot: split the games into training, validation, and test sets, and then train our policy on the training games and test it on the unseen test games. This setting is the hardest among all, since it requires generalization to unseen games.",
"In both CoinCollector and CookingWorld games an observation $o_t$ provided by the environment consists of a room description $D$, inventory information $I$, quest $Q$, previous action $P$ and feedback $F$ provided in the previous turn. Table TABREF3 shows an example for each of these components. In our experiments for phase 1 of Go-Explore we only use $D$ as the observation."
],
[
"For the CoinCollector games, we compared Go-Explore with the episodic discovery bonus BIBREF30 that was used by BIBREF8 to improve two Q-learning-based baselines: DQN++ and DRQN++. We used the code provided by the authors and the same hyper-parameters.",
"For the CookingWorld games, we implemented three different treatments based on two existing methods:",
"[leftmargin=*]",
"LSTM-DQN BIBREF7, BIBREF8: An LSTM based state encoder with a separate $Q$-functions for each component (word) of a fixed pattern of Verb, Adjective1, Noun1, Adjective2, and Noun2. In this approach, given the observation $o_t$, the tokens are first converted into embeddings, then an LSTM is used to extract a sequence of hidden states $H_{dqn}\\in \\mathbb {R}^{d \\times |o_t|}$. A mean-pool layer is applied to $H_{dqn}$ to produce a single vector $h_{o^t}$ that represents the whole sequence. Next, a linear transformation $W_{\\text{type}}\\in \\mathbb {R}^{d \\times |V_{\\text{type}}|}$ is used to generate each of the Q values, where $|V_{\\text{type}}| \\ll |V|$ is the subset of the original vocabulary restricted to the word type of a particular game (e.g for Verb type: take, drop, etc.). Formally, we have:",
"Next, all the $Q$-functions are jointly trained using the DQN algorithm with $\\epsilon $-greedy exploration BIBREF15, BIBREF16. At evaluation time, the argmax of each $Q$-function is concatenated to produce $a_t$. Importantly, in $V_{\\text{type}}$ a special token $<$s$>$ is used to denote the absence of a word, so the model can produce actions with different lengths. Figure FIGREF57 in Appendix SECREF55 shows a depiction of this model.",
"LSTM-DQN+ADM: It is the same model as LSTM-DQN, except that the random actions for $\\epsilon $-greedy exploration are sampled from the set of admissible actions instead of creating them by sampling each word separately.",
"DRRN BIBREF10: In this approach a model learns how to score admissible actions instead of directly generating the action token by token. The policy uses an LSTM for encoding the observation and actions are represented as the sum of the embedding of the word tokens they contain. Then, the $Q$ value is defined as the dot product between the embedded representations of the observation and the action. Following the aforementioned notation, $h_{o^t}$ is generated as in the LSTM-DQN baseline. Next, we define its embedded representation as $c_i=\\sum _k^{|a_i|} E(a_i^k)$, where $E$ is an embedding matrix as in Equation 1. Thus, the $Q$-function is defined as:",
"At testing time the action with the highest $Q$ value is chosen. Figure FIGREF58 in Appendix SECREF55 shows a depiction of this model."
],
[
"In all the games the maximum number of steps has been set to 50. As mentioned earlier, the cell representation used in the Go-Explore archive is computed by the sum of embeddings of the room description tokens concatenated with the current cumulative reward. The sum of embeddings is computed using 50 dimensional pre-trained GloVe BIBREF36 vectors. In the CoinCollector baselines we use the same hyper-parameters as in the original paper. In CookingWorld all the baselines use pre-trained GloVe of dimension 100 for the single setting and 300 for the joint one. The LSTM hidden state has been set to 300 for all the models."
],
[
"In this setting, we compare the number of actions played in the environment (frames) and the score achieved by the agent (i.e. +1 reward if the coin is collected). In Go-Explore we also count the actions used to restore the environment to a selected cell, i.e. to bring the agent to the state represented in the selected cell. This allows a one-to-one comparison of the exploration efficiency between Go-Explore and algorithms that use a count-based reward in text-based games. Importantly, BIBREF8 showed that DQN and DRQN, without such counting rewards, could never find a successful trajectory in hard games such as the ones used in our experiments. Figure FIGREF17 shows the number of interactions with the environment (frames) versus the maximum score obtained, averaged over 10 games of the same difficulty. As shown by BIBREF8, DRQN++ finds a trajectory with the maximum score faster than to DQN++. On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively."
],
[
"In CookingWorld, we compared models in the three settings mentioned earlier, namely, single, joint, and zero-shot. In all experiments, we measured the sum of the final scores of all the games and their trajectory length (number of steps). Table TABREF26 summarizes the results in these three settings. Phase 1 of Go-Explore on single games achieves a total score of 19,530 (sum over all games), which is very close to the maximum possible points (i.e. 19,882), with 47,562 steps. A winning trajectory was found in 4,279 out of the total of 4,440 games. This result confirms again that the exploration strategy of Go-Explore is effective in text-based games. Next, we evaluate the effectiveness and the generalization ability of the simple imitation learning policy trained using the extracted trajectories in phase 1 of Go-Explore in the three settings mentioned above."
],
[
"In this setting, each model is trained from scratch in each of the 4,440 games based on the trajectory found in phase 1 of Go-Explore (previous step). As shown in Table TABREF26, the LSTM-DQN BIBREF7, BIBREF8 approach without the use of admissible actions performs poorly. One explanation for this could be that it is difficult for this model to explore both language and game strategy at the same time; it is hard for the model to find a reward signal before it has learned to model language, since almost none of its actions will be admissible, and those reward signals are what is necessary in order to learn the language model. As we see in Table TABREF26, however, by using the admissible actions in the $\\epsilon $-greedy step the score achieved by the LSTM-DQN increases dramatically (+ADM row in Table TABREF26). DRRN BIBREF10 achieves a very high score, since it explicitly learns how to rank admissible actions (i.e. a much simpler task than generating text). Finally, our approach of using a Seq2Seq model trained on the single trajectory provided by phase 1 of Go-Explore achieves the highest score among all the methods, even though we do not use admissible actions in this phase. However, in this experiment the Seq2Seq model cannot perfectly replicate the provided trajectory and the total score that it achieves is in fact 9.4% lower compared to the total score achieved by phase 1 of Go-Explore. Figure FIGREF61 (in Appendix SECREF60) shows the score breakdown for each level and model, where we can see that the gap between our model and other methods increases as the games become harder in terms of skills needed."
],
[
"In this setting, a single model is trained on all the games at the same time, to test whether one agent can learn to play multiple games. Overall, as expected, all the evaluated models achieved a lower performance compared to the single game setting. One reason for this could be that learning multiple games at the same time leads to a situation where the agent encounters similar observations in different games, and the correct action to take in different games may be different. Furthermore, it is important to note that the order in which games are presented greatly affects the performance of LSTM-DQN and DRRN. In our experiments, we tried both an easy-to-hard curriculum (i.e. sorting the games by increasing level of difficulty) and a shuffled curriculum. Shuffling the games at each epoch resulted in far better performance, thus we only report the latter. In Figure FIGREF28 we show the score breakdown, and we can see that all the baselines quickly fail, even for easier games."
],
[
"In this setting the 4,440 games are split into training, validation, and test games. The split is done randomly but in a way that different difficulty levels (recipes 1, 2 and 3), are represented with equal ratios in all the 3 splits, i.e. stratified by difficulty. As shown in Table TABREF26, the zero-shot performance of the RL baselines is poor, which could be attributed to the same reasons why RL baselines under-perform in the Joint case. Especially interesting is that the performance of DRRN is substantially lower than that of the Go-Explore Seq2Seq model, even though the DRRN model has access to the admissible actions at test time, while the Seq2Seq model (as well as the LSTM-DQN model) has to construct actions token-by-token from the entire vocabulary of 20,000 tokens. On the other hand, Go-Explore Seq2Seq shows promising results by solving almost half of the unseen games. Figure FIGREF62 (in Appendix SECREF60) shows that most of the lost games are in the hardest set, where a very long sequence of actions is required for winning the game. These results demonstrate both the relative effectiveness of training a Seq2Seq model on Go-Explore trajectories, but they also indicate that additional effort needed for designing reinforcement learning algorithms that effectively generalize to unseen games."
],
[
"Experimental results show that our proposed Go-Explore exploration strategy is a viable methodology for extracting high-performing trajectories in text-based games. This method allows us to train supervised models that can outperform existing models in the experimental settings that we study. Finally, there are still several challenges and limitations that both our methodology and previous solutions do not fully address yet. For instance:"
],
[
"The state representation is the main limitation of our proposed imitation learning model. In fact, by examining the observations provided in different games, we notice a large overlap in the descriptions ($D$) of the games. This overlap leads to a situation where the policy receives very similar observations, but is expected to imitate two different actions. This show especially in the joint setting of CookingWorld, where the 222 games are repeated 20 times with different entities and room maps. In this work, we opted for a simple Seq2Seq model for our policy, since our goal is to show the effectiveness of our proposed exploration methods. However, a more complex Hierarchical-Seq2Seq model BIBREF37 or a better encoder representation based on knowledge graphs BIBREF38, BIBREF39 would likely improve the of performance this approach."
],
[
"In Go-Explore, the given admissible actions are used during random exploration. However, in more complex games, e.g. Zork I and in general the Z-Machine games, these admissible actions are not provided. In such settings, the action space would explode in size, and thus Go-Explore, even with an appropriate cell representation, would have a hard time finding good trajectories. To address this issue one could leverage general language models to produce a set of grammatically correct actions. Alternatively one could iteratively learn a policy to sample actions, while exploring with Go-Explore. Both strategies are viable, and a comparison is left to future work.",
"It is worth noting that a hand-tailored solution for the CookingWorld games has been proposed in the “First TextWorld Problems” competition BIBREF3. This solution managed to obtain up to 91.9% of the maximum possible score across the 514 test games on an unpublished dataset. However, this solution relies on entity extraction and template filling, which we believe limits its potential for generalization. Therefore, this approach should be viewed as complementary rather than competitor to our approach as it could potentially be used as an alternative way of getting promising trajectories."
],
[
"In this paper we presented a novel methodology for solving text-based games which first extracts high-performing trajectories using phase 1 of Go-Explore and then trains a simple Seq2Seq model that maps observations to actions using the extracted trajectories. Our experiments show promising results in three settings, with improved generalization and sample efficiency compared to existing methods. Finally, we discussed the limitations and possible improvements of our methodology, which leads to new research challenges in text-based games."
],
[
"In the hard setting (mode 2), each room on the path to the coin has two distractor rooms, and the level (e.g. 30) indicates the shortest path from the starting point to the coin room."
],
[
"The game's complexity is determined by the number of skills and the types of skills that an agent needs to master. The skills are:",
"recipe {1,2,3}: number of ingredients in the recipe",
"take {1,2,3}: number of ingredients to find (not already in the inventory)",
"open: whether containers/doors need to be opened",
"cook: whether ingredients need to be cooked",
"cut: whether ingredients need to be cut",
"drop: whether the inventory has limited capacity",
"go {1,6,9,12}: number of locations",
"Thus the hardest game would be a recipe with 3 ingredients, which must all be picked up somewhere across 12 locations and then need to be cut and cooked, and to get access to some locations, several doors or objects need to be opened. The handicap of a limited capacity in the inventory makes the game more difficult by requiring the agent to drop an object and later on take it again if needed. The grammar used for the text-based games is the following:",
"go, look, examine, inventory, eat, open/close, take/drop, put/insert",
"cook X with Y $\\longrightarrow $ grilled X (when Y is the BBQ)",
"cook X with Y $\\longrightarrow $ roasted X (when Y is the oven)",
"cook X with Y $\\longrightarrow $ fried X (when Y is the stove)",
"slice X with Y $\\longrightarrow $ sliced X",
"chop X with Y $\\longrightarrow $ chopped X",
"dice X with Y $\\longrightarrow $ diced X",
"prepare meal",
"Where Y is something sharp (e.g. knife)."
]
],
"section_name": [
"Introduction",
"Introduction ::: Reinforcement Learning Based Approaches for Text-Based Games",
"Introduction ::: Exploration in Reinforcement Learning",
"Methodology",
"Methodology ::: Phase 1: Exploration",
"Methodology ::: Phase 2: Generalization",
"Experiments ::: Games and experiments setup",
"Experiments ::: Baselines",
"Experiments ::: Hyper-parameters",
"Results ::: CoinCollector",
"Results ::: CookingWorld",
"Results ::: CookingWorld ::: Single",
"Results ::: CookingWorld ::: Joint",
"Results ::: CookingWorld ::: Zero-Shot",
"Discussion",
"Discussion ::: State Representation",
"Discussion ::: Language Based Exploration",
"Conclusion",
"Appendix ::: Text-Games ::: CoinCollector",
"Appendix ::: Text-Games ::: CookingWorld"
]
} | {
"answers": [
{
"annotation_id": [
"1b43327011240ccf3b1cfe1ba03ebf2fc708707a",
"e720a2127dd24d2b670c313ac615dac87f00e492"
],
"answer": [
{
"evidence": [
"Results ::: CoinCollector",
"In this setting, we compare the number of actions played in the environment (frames) and the score achieved by the agent (i.e. +1 reward if the coin is collected). In Go-Explore we also count the actions used to restore the environment to a selected cell, i.e. to bring the agent to the state represented in the selected cell. This allows a one-to-one comparison of the exploration efficiency between Go-Explore and algorithms that use a count-based reward in text-based games. Importantly, BIBREF8 showed that DQN and DRQN, without such counting rewards, could never find a successful trajectory in hard games such as the ones used in our experiments. Figure FIGREF17 shows the number of interactions with the environment (frames) versus the maximum score obtained, averaged over 10 games of the same difficulty. As shown by BIBREF8, DRQN++ finds a trajectory with the maximum score faster than to DQN++. On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively.",
"In CookingWorld, we compared models in the three settings mentioned earlier, namely, single, joint, and zero-shot. In all experiments, we measured the sum of the final scores of all the games and their trajectory length (number of steps). Table TABREF26 summarizes the results in these three settings. Phase 1 of Go-Explore on single games achieves a total score of 19,530 (sum over all games), which is very close to the maximum possible points (i.e. 19,882), with 47,562 steps. A winning trajectory was found in 4,279 out of the total of 4,440 games. This result confirms again that the exploration strategy of Go-Explore is effective in text-based games. Next, we evaluate the effectiveness and the generalization ability of the simple imitation learning policy trained using the extracted trajectories in phase 1 of Go-Explore in the three settings mentioned above.",
"In this setting, each model is trained from scratch in each of the 4,440 games based on the trajectory found in phase 1 of Go-Explore (previous step). As shown in Table TABREF26, the LSTM-DQN BIBREF7, BIBREF8 approach without the use of admissible actions performs poorly. One explanation for this could be that it is difficult for this model to explore both language and game strategy at the same time; it is hard for the model to find a reward signal before it has learned to model language, since almost none of its actions will be admissible, and those reward signals are what is necessary in order to learn the language model. As we see in Table TABREF26, however, by using the admissible actions in the $\\epsilon $-greedy step the score achieved by the LSTM-DQN increases dramatically (+ADM row in Table TABREF26). DRRN BIBREF10 achieves a very high score, since it explicitly learns how to rank admissible actions (i.e. a much simpler task than generating text). Finally, our approach of using a Seq2Seq model trained on the single trajectory provided by phase 1 of Go-Explore achieves the highest score among all the methods, even though we do not use admissible actions in this phase. However, in this experiment the Seq2Seq model cannot perfectly replicate the provided trajectory and the total score that it achieves is in fact 9.4% lower compared to the total score achieved by phase 1 of Go-Explore. Figure FIGREF61 (in Appendix SECREF60) shows the score breakdown for each level and model, where we can see that the gap between our model and other methods increases as the games become harder in terms of skills needed.",
"In this setting the 4,440 games are split into training, validation, and test games. The split is done randomly but in a way that different difficulty levels (recipes 1, 2 and 3), are represented with equal ratios in all the 3 splits, i.e. stratified by difficulty. As shown in Table TABREF26, the zero-shot performance of the RL baselines is poor, which could be attributed to the same reasons why RL baselines under-perform in the Joint case. Especially interesting is that the performance of DRRN is substantially lower than that of the Go-Explore Seq2Seq model, even though the DRRN model has access to the admissible actions at test time, while the Seq2Seq model (as well as the LSTM-DQN model) has to construct actions token-by-token from the entire vocabulary of 20,000 tokens. On the other hand, Go-Explore Seq2Seq shows promising results by solving almost half of the unseen games. Figure FIGREF62 (in Appendix SECREF60) shows that most of the lost games are in the hardest set, where a very long sequence of actions is required for winning the game. These results demonstrate both the relative effectiveness of training a Seq2Seq model on Go-Explore trajectories, but they also indicate that additional effort needed for designing reinforcement learning algorithms that effectively generalize to unseen games."
],
"extractive_spans": [
" On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment",
"Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively.",
"Especially interesting is that the performance of DRRN is substantially lower than that of the Go-Explore Seq2Seq model"
],
"free_form_answer": "",
"highlighted_evidence": [
"Results ::: CoinCollector\nIn this setting, we compare the number of actions played in the environment (frames) and the score achieved by the agent (i.e. +1 reward if the coin is collected). In Go-Explore we also count the actions used to restore the environment to a selected cell, i.e. to bring the agent to the state represented in the selected cell. This allows a one-to-one comparison of the exploration efficiency between Go-Explore and algorithms that use a count-based reward in text-based games. Importantly, BIBREF8 showed that DQN and DRQN, without such counting rewards, could never find a successful trajectory in hard games such as the ones used in our experiments. Figure FIGREF17 shows the number of interactions with the environment (frames) versus the maximum score obtained, averaged over 10 games of the same difficulty. As shown by BIBREF8, DRQN++ finds a trajectory with the maximum score faster than to DQN++. On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively.",
"In case of Game CoinCollector, ",
"In CookingWorld, we compared models in the three settings mentioned earlier, namely, single, joint, and zero-shot. In all experiments, we measured the sum of the final scores of all the games and their trajectory length (number of steps). Table TABREF26 summarizes the results in these three settings. Phase 1 of Go-Explore on single games achieves a total score of 19,530 (sum over all games), which is very close to the maximum possible points (i.e. 19,882), with 47,562 steps. A winning trajectory was found in 4,279 out of the total of 4,440 games. This result confirms again that the exploration strategy of Go-Explore is effective in text-based games. Next, we evaluate the effectiveness and the generalization ability of the simple imitation learning policy trained using the extracted trajectories in phase 1 of Go-Explore in the three settings mentioned above.",
"In this setting, each model is trained from scratch in each of the 4,440 games based on the trajectory found in phase 1 of Go-Explore (previous step). As shown in Table TABREF26, the LSTM-DQN BIBREF7, BIBREF8 approach without the use of admissible actions performs poorly. One explanation for this could be that it is difficult for this model to explore both language and game strategy at the same time; it is hard for the model to find a reward signal before it has learned to model language, since almost none of its actions will be admissible, and those reward signals are what is necessary in order to learn the language model. As we see in Table TABREF26, however, by using the admissible actions in the $\\epsilon $-greedy step the score achieved by the LSTM-DQN increases dramatically (+ADM row in Table TABREF26). DRRN BIBREF10 achieves a very high score, since it explicitly learns how to rank admissible actions (i.e. a much simpler task than generating text). Finally, our approach of using a Seq2Seq model trained on the single trajectory provided by phase 1 of Go-Explore achieves the highest score among all the methods, even though we do not use admissible actions in this phase. However, in this experiment the Seq2Seq model cannot perfectly replicate the provided trajectory and the total score that it achieves is in fact 9.4% lower compared to the total score achieved by phase 1 of Go-Explore. Figure FIGREF61 (in Appendix SECREF60) shows the score breakdown for each level and model, where we can see that the gap between our model and other methods increases as the games become harder in terms of skills needed.",
"In this setting the 4,440 games are split into training, validation, and test games. The split is done randomly but in a way that different difficulty levels (recipes 1, 2 and 3), are represented with equal ratios in all the 3 splits, i.e. stratified by difficulty. As shown in Table TABREF26, the zero-shot performance of the RL baselines is poor, which could be attributed to the same reasons why RL baselines under-perform in the Joint case. Especially interesting is that the performance of DRRN is substantially lower than that of the Go-Explore Seq2Seq model, even though the DRRN model has access to the admissible actions at test time, while the Seq2Seq model (as well as the LSTM-DQN model) has to construct actions token-by-token from the entire vocabulary of 20,000 tokens. On the other hand, Go-Explore Seq2Seq shows promising results by solving almost half of the unseen games. Figure FIGREF62 (in Appendix SECREF60) shows that most of the lost games are in the hardest set, where a very long sequence of actions is required for winning the game. These results demonstrate both the relative effectiveness of training a Seq2Seq model on Go-Explore trajectories, but they also indicate that additional effort needed for designing reinforcement learning algorithms that effectively generalize to unseen games."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this setting, we compare the number of actions played in the environment (frames) and the score achieved by the agent (i.e. +1 reward if the coin is collected). In Go-Explore we also count the actions used to restore the environment to a selected cell, i.e. to bring the agent to the state represented in the selected cell. This allows a one-to-one comparison of the exploration efficiency between Go-Explore and algorithms that use a count-based reward in text-based games. Importantly, BIBREF8 showed that DQN and DRQN, without such counting rewards, could never find a successful trajectory in hard games such as the ones used in our experiments. Figure FIGREF17 shows the number of interactions with the environment (frames) versus the maximum score obtained, averaged over 10 games of the same difficulty. As shown by BIBREF8, DRQN++ finds a trajectory with the maximum score faster than to DQN++. On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively.",
"FLOAT SELECTED: Table 3: CookingWorld results on the three evaluated settings single, joint and zero-shot."
],
"extractive_spans": [],
"free_form_answer": "On Coin Collector, proposed model finds shorter path in fewer number of interactions with enironment.\nOn Cooking World, proposed model uses smallest amount of steps and on average has bigger score and number of wins by significant margin.",
"highlighted_evidence": [
"On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively.",
"FLOAT SELECTED: Table 3: CookingWorld results on the three evaluated settings single, joint and zero-shot."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5a118f309b75691ce73df349485f5231897d2932"
],
"answer": [
{
"evidence": [
"Go-Explore BIBREF0 differs from the exploration-based algorithms discussed above in that it explicitly keeps track of under-explored areas of the state space and in that it utilizes the determinism of the simulator in order to return to those states, allowing it to explore sparse-reward environments in a sample efficient way (see BIBREF0 as well as section SECREF27). For the experiments in this paper we mainly focus on the final performance of our policy, not how that policy is trained, thus making Go-Explore a suitable algorithm for our experiments. Go-Explore is composed of two phases. In phase 1 (also referred to as the “exploration” phase) the algorithm explores the state space through keeping track of previously visited states by maintaining an archive. During this phase, instead of resuming the exploration from scratch, the algorithm starts exploring from promising states in the archive to find high performing trajectories. In phase 2 (also referred to as the “robustification” phase, while in our variant we will call it “generalization”) the algorithm trains a policy using the trajectories found in phase 1. Following this framework, which is also shown in Figure FIGREF56 (Appendix A.2), we define the Go-Explore phases for text-based games."
],
"extractive_spans": [
"explores the state space through keeping track of previously visited states by maintaining an archive"
],
"free_form_answer": "",
"highlighted_evidence": [
"In phase 1 (also referred to as the “exploration” phase) the algorithm explores the state space through keeping track of previously visited states by maintaining an archive. During this phase, instead of resuming the exploration from scratch, the algorithm starts exploring from promising states in the archive to find high performing trajectories."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"15f6b90b2331952ecf61fac25a35c5bc5cd8e638",
"e50135879dff273808f0e4185144f4dd93816929"
],
"answer": [
{
"evidence": [
"CoinCollector BIBREF8 is a class of text-based games where the objective is to find and collect a coin from a specific location in a given set of connected rooms . The agent wins the game after it collects the coin, at which point (for the first and only time) a reward of +1 is received by the agent. The environment parses only five admissible commands (go north, go east, go south, go west, and take coin) made by two worlds;",
"CookingWorld BIBREF14 in this challenge, there are 4,440 games with 222 different levels of difficulty, with 20 games per level of difficulty, each with different entities and maps. The goal of each game is to cook and eat food from a given recipe, which includes the task of collecting ingredients (e.g. tomato, potato, etc.), objects (e.g. knife), and processing them according to the recipe (e.g. cook potato, slice tomato, etc.). The parser of each game accepts 18 verbs and 51 entities with a predefined grammar, but the overall size of the vocabulary of the observations is 20,000. In Appendix SECREF36 we provide more details about the levels and the games' grammar."
],
"extractive_spans": [
"CoinCollector ",
"CookingWorld "
],
"free_form_answer": "",
"highlighted_evidence": [
"CoinCollector BIBREF8 is a class of text-based games where the objective is to find and collect a coin from a specific location in a given set of connected rooms . The agent wins the game after it collects the coin, at which point (for the first and only time) a reward of +1 is received by the agent. The environment parses only five admissible commands (go north, go east, go south, go west, and take coin) made by two worlds;",
"CookingWorld BIBREF14 in this challenge, there are 4,440 games with 222 different levels of difficulty, with 20 games per level of difficulty, each with different entities and maps. The goal of each game is to cook and eat food from a given recipe, which includes the task of collecting ingredients (e.g. tomato, potato, etc.), objects (e.g. knife), and processing them according to the recipe (e.g. cook potato, slice tomato, etc.). The parser of each game accepts 18 verbs and 51 entities with a predefined grammar, but the overall size of the vocabulary of the observations is 20,000. In Appendix SECREF36 we provide more details about the levels and the games' grammar."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"CoinCollector BIBREF8 is a class of text-based games where the objective is to find and collect a coin from a specific location in a given set of connected rooms . The agent wins the game after it collects the coin, at which point (for the first and only time) a reward of +1 is received by the agent. The environment parses only five admissible commands (go north, go east, go south, go west, and take coin) made by two worlds;",
"CookingWorld BIBREF14 in this challenge, there are 4,440 games with 222 different levels of difficulty, with 20 games per level of difficulty, each with different entities and maps. The goal of each game is to cook and eat food from a given recipe, which includes the task of collecting ingredients (e.g. tomato, potato, etc.), objects (e.g. knife), and processing them according to the recipe (e.g. cook potato, slice tomato, etc.). The parser of each game accepts 18 verbs and 51 entities with a predefined grammar, but the overall size of the vocabulary of the observations is 20,000. In Appendix SECREF36 we provide more details about the levels and the games' grammar."
],
"extractive_spans": [
"CoinCollector",
"CookingWorld"
],
"free_form_answer": "",
"highlighted_evidence": [
"CoinCollector BIBREF8 is a class of text-based games where the objective is to find and collect a coin from a specific location in a given set of connected rooms .",
"CookingWorld BIBREF14 in this challenge, there are 4,440 games with 222 different levels of difficulty, with 20 games per level of difficulty, each with different entities and maps. The goal of each game is to cook and eat food from a given recipe, which includes the task of collecting ingredients (e.g. tomato, potato, etc.), objects (e.g. knife), and processing them according to the recipe (e.g. cook potato, slice tomato, etc.)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"60b76650dc5b690b7c4629ce9ca0dba0f9f1787e"
],
"answer": [
{
"evidence": [
"In this setting the 4,440 games are split into training, validation, and test games. The split is done randomly but in a way that different difficulty levels (recipes 1, 2 and 3), are represented with equal ratios in all the 3 splits, i.e. stratified by difficulty. As shown in Table TABREF26, the zero-shot performance of the RL baselines is poor, which could be attributed to the same reasons why RL baselines under-perform in the Joint case. Especially interesting is that the performance of DRRN is substantially lower than that of the Go-Explore Seq2Seq model, even though the DRRN model has access to the admissible actions at test time, while the Seq2Seq model (as well as the LSTM-DQN model) has to construct actions token-by-token from the entire vocabulary of 20,000 tokens. On the other hand, Go-Explore Seq2Seq shows promising results by solving almost half of the unseen games. Figure FIGREF62 (in Appendix SECREF60) shows that most of the lost games are in the hardest set, where a very long sequence of actions is required for winning the game. These results demonstrate both the relative effectiveness of training a Seq2Seq model on Go-Explore trajectories, but they also indicate that additional effort needed for designing reinforcement learning algorithms that effectively generalize to unseen games."
],
"extractive_spans": [
"promising results by solving almost half of the unseen games",
"most of the lost games are in the hardest set, where a very long sequence of actions is required for winning the game"
],
"free_form_answer": "",
"highlighted_evidence": [
"On the other hand, Go-Explore Seq2Seq shows promising results by solving almost half of the unseen games. Figure FIGREF62 (in Appendix SECREF60) shows that most of the lost games are in the hardest set, where a very long sequence of actions is required for winning the game. These results demonstrate both the relative effectiveness of training a Seq2Seq model on Go-Explore trajectories, but they also indicate that additional effort needed for designing reinforcement learning algorithms that effectively generalize to unseen games."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How better does new approach behave than existing solutions?",
"How is trajectory with how rewards extracted?",
"On what Text-Based Games are experiments performed?",
"How do the authors show that their learned policy generalize better than existing solutions to unseen games?"
],
"question_id": [
"df0257ab04686ddf1c6c4d9b0529a7632330b98e",
"568fb7989a133564d84911e7cb58e4d8748243ef",
"2c947447d81252397839d58c75ebcc71b34379b5",
"c01784b995f6594fdb23d7b62f20a35ae73eaa77"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"computer vision",
"computer vision",
"computer vision",
"computer vision"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Example of the observations provided by the CookingWorld environment.",
"Table 2: Statistics of the two families of text-based games used in the experiments. The average is among the different games in CookingWorld and among different instance of the same game for CoinCollector.",
"Figure 1: CoinCollector results of DQN++ and DRQN++ (Yuan et al., 2018) versus Go-Explore Phase1, i.e. just exploration.",
"Table 3: CookingWorld results on the three evaluated settings single, joint and zero-shot.",
"Figure 3: High level intuition of the Go-Exlore algorithm. Figure taken from Ecoffet et al. (2019) with permission.",
"Figure 4: LSTM-DQN high level schema.",
"Figure 7: Breakdown of results for the CookingWorld games in the Single setting. The results are normalized and sorted by increasing difficulty level from left to right, averaged among the 20 games of each level.",
"Figure 8: Breakdown of results for the CookingWorld games in the Zero-Shot setting. The results are normalized and sorted by increasing difficulty level from left to right, averaged among the 20 games of each level.",
"Figure 9: Single all the games",
"Figure 10: Joint all the games",
"Figure 11: Zero-Shot all the games"
],
"file": [
"3-Table1-1.png",
"5-Table2-1.png",
"6-Figure1-1.png",
"7-Table3-1.png",
"12-Figure3-1.png",
"13-Figure4-1.png",
"14-Figure7-1.png",
"14-Figure8-1.png",
"15-Figure9-1.png",
"16-Figure10-1.png",
"17-Figure11-1.png"
]
} | [
"How better does new approach behave than existing solutions?"
] | [
[
"2001.08868-Results ::: CookingWorld ::: Zero-Shot-0",
"2001.08868-Results ::: CookingWorld-0",
"2001.08868-Results ::: CoinCollector-0",
"2001.08868-Results ::: CookingWorld ::: Single-0",
"2001.08868-7-Table3-1.png"
]
] | [
"On Coin Collector, proposed model finds shorter path in fewer number of interactions with enironment.\nOn Cooking World, proposed model uses smallest amount of steps and on average has bigger score and number of wins by significant margin."
] | 234 |
1910.12795 | Learning Data Manipulation for Augmentation and Weighting | Manipulating data, such as weighting data examples or augmenting with new instances, has been increasingly used to improve model training. Previous work has studied various rule- or learning-based approaches designed for specific types of data manipulation. In this work, we propose a new method that supports learning different manipulation schemes with the same gradient-based algorithm. Our approach builds upon a recent connection of supervised learning and reinforcement learning (RL), and adapts an off-the-shelf reward learning algorithm from RL for joint data manipulation learning and model training. Different parameterization of the ``data reward'' function instantiates different manipulation schemes. We showcase data augmentation that learns a text transformation network, and data weighting that dynamically adapts the data sample importance. Experiments show the resulting algorithms significantly improve the image and text classification performance in low data regime and class-imbalance problems. | {
"paragraphs": [
[
"The performance of machines often crucially depend on the amount and quality of the data used for training. It has become increasingly ubiquitous to manipulate data to improve learning, especially in low data regime or in presence of low-quality datasets (e.g., imbalanced labels). For example, data augmentation applies label-preserving transformations on original data points to expand the data size; data weighting assigns an importance weight to each instance to adapt its effect on learning; and data synthesis generates entire artificial examples. Different types of manipulation can be suitable for different application settings.",
"Common data manipulation methods are usually designed manually, e.g., augmenting by flipping an image or replacing a word with synonyms, and weighting with inverse class frequency or loss values BIBREF0, BIBREF1. Recent work has studied automated approaches, such as learning the composition of augmentation operators with reinforcement learning BIBREF2, BIBREF3, deriving sample weights adaptively from a validation set via meta learning BIBREF4, or learning a weighting network by inducing a curriculum BIBREF5. These learning-based approaches have alleviated the engineering burden and produced impressive results. However, the algorithms are usually designed specifically for certain types of manipulation (e.g., either augmentation or weighting) and thus have limited application scope in practice.",
"In this work, we propose a new approach that enables learning for different manipulation schemes with the same single algorithm. Our approach draws inspiration from the recent work BIBREF6 that shows equivalence between the data in supervised learning and the reward function in reinforcement learning. We thus adapt an off-the-shelf reward learning algorithm BIBREF7 to the supervised setting for automated data manipulation. The marriage of the two paradigms results in a simple yet general algorithm, where various manipulation schemes are reduced to different parameterization of the data reward. Free parameters of manipulation are learned jointly with the target model through efficient gradient descent on validation examples. We demonstrate instantiations of the approach for automatically fine-tuning an augmentation network and learning data weights, respectively.",
"We conduct extensive experiments on text and image classification in challenging situations of very limited data and imbalanced labels. Both augmentation and weighting by our approach significantly improve over strong base models, even though the models are initialized with large-scale pretrained networks such as BERT BIBREF8 for text and ResNet BIBREF9 for images. Our approach, besides its generality, also outperforms a variety of dedicated rule- and learning-based methods for either augmentation or weighting, respectively. Lastly, we observe that the two types of manipulation tend to excel in different contexts: augmentation shows superiority over weighting with a small amount of data available, while weighting is better at addressing class imbalance problems.",
"The way we derive the manipulation algorithm represents a general means of problem solving through algorithm extrapolation between learning paradigms, which we discuss more in section SECREF6."
],
[
"Rich types of data manipulation have been increasingly used in modern machine learning pipelines. Previous work each has typically focused on a particular manipulation type. Data augmentation that perturbs examples without changing the labels is widely used especially in vision BIBREF10, BIBREF11 and speech BIBREF12, BIBREF13 domains. Common heuristic-based methods on images include cropping, mirroring, rotation BIBREF11, and so forth. Recent work has developed automated augmentation approaches BIBREF3, BIBREF2, BIBREF14, BIBREF15, BIBREF16. BIBREF17 additionally use large-scale unlabeled data. BIBREF3, BIBREF2 learn to induce the composition of data transformation operators. Instead of treating data augmentation as a policy in reinforcement learning BIBREF3, we formulate manipulation as a reward function and use efficient stochastic gradient descent to learn the manipulation parameters. Text data augmentation has also achieved impressive success, such as contextual augmentation BIBREF18, BIBREF19, back-translation BIBREF20, and manual approaches BIBREF21, BIBREF22. In addition to perturbing the input text as in classification tasks, text generation problems expose opportunities to adding noise also in the output text, such as BIBREF23, BIBREF24. Recent work BIBREF6 shows output nosing in sequence generation can be treated as an intermediate approach in between supervised learning and reinforcement learning, and developed a new sequence learning algorithm that interpolates between the spectrum of existing algorithms. We instantiate our approach for text contextual augmentation as in BIBREF18, BIBREF19, but enhance the previous work by additionally fine-tuning the augmentation network jointly with the target model.",
"Data weighting has been used in various algorithms, such as AdaBoost BIBREF0, self-paced learning BIBREF25, hard-example mining BIBREF26, and others BIBREF27, BIBREF28. These algorithms largely define sample weights based on training loss. Recent work BIBREF5, BIBREF29 learns a separate network to predict sample weights. Of particular relevance to our work is BIBREF4 which induces sample weights using a validation set. The data weighting mechanism instantiated by our framework has a key difference in that samples weights are treated as parameters that are updated iteratively, instead of re-estimated from scratch at each step. We show improved performance of our approach. Besides, our data manipulation approach is derived based on a different perspective of reward learning, instead of meta-learning as in BIBREF4.",
"Another popular type of data manipulation involves data synthesis, which creates entire artificial samples from scratch. GAN-based approaches have achieved impressive results for synthesizing conditional image data BIBREF30, BIBREF31. In the text domain, controllable text generation BIBREF32 presents a way of co-training the data generator and classifier in a cyclic manner within a joint VAE BIBREF33 and wake-sleep BIBREF34 framework. It is interesting to explore the instantiation of the present approach for adaptive data synthesis in the future."
],
[
"We first present the relevant work upon which our automated data manipulation is built. This section also establishes the notations used throughout the paper.",
"Let $\\mathbf {x}$ denote the input and $y$ the output. For example, in text classification, $\\mathbf {x}$ can be a sentence and $y$ is the sentence label. Denote the model of interest as $p_\\theta (y|\\mathbf {x})$, where $\\mathbf {\\theta }$ is the model parameters to be learned. In supervised setting, given a set of training examples $\\mathcal {D}=\\lbrace (\\mathbf {x}^*, y^*)\\rbrace $, we learn the model by maximizing the data log-likelihood."
],
[
"The recent work BIBREF6 introduced a unifying perspective of reformulating maximum likelihood supervised learning as a special instance of a policy optimization framework. In this perspective, data examples providing supervision signals are equivalent to a specialized reward function. Since the original framework BIBREF6 was derived for sequence generation problems, here we present a slightly adapted formulation for our context of data manipulation.",
"To connect the maximum likelihood supervised learning with policy optimization, consider the model $p_\\theta (y|\\mathbf {x})$ as a policy that takes “action” $y$ given the “state” $\\mathbf {x}$. Let $R(\\mathbf {x}, y |\\mathcal {D})\\in \\mathbb {R}$ denote a reward function, and $p(\\mathbf {x})$ be the empirical data distribution which is known given $\\mathcal {D}$. Further assume a variational distribution $q(\\mathbf {x}, y)$ that factorizes as $q(\\mathbf {x},y)=p(\\mathbf {x})q(y|\\mathbf {x})$. A variational policy optimization objective is then written as:",
"where $\\text{KL}(\\cdot \\Vert \\cdot )$ is the Kullback–Leibler divergence; $\\text{H}(\\cdot )$ is the Shannon entropy; and $\\alpha ,\\beta >0$ are balancing weights. The objective is in the same form with the RL-as-inference formalism of policy optimization BIBREF35, BIBREF36, BIBREF37. Intuitively, the objective maximizes the expected reward under $q$, and enforces the model $p_\\theta $ to stay close to $q$, with a maximum entropy regularization over $q$. The problem is solved with an EM procedure that optimizes $q$ and $\\mathbf {\\theta }$ alternatingly:",
"where $Z$ is the normalization term. With the established framework, it is easy to show that the above optimization procedure reduces to maximum likelihood learning by taking $\\alpha \\rightarrow 0, \\beta =1$, and the reward function:",
"That is, a sample $(\\mathbf {x}, y)$ receives a unit reward only when it matches a training example in the dataset, while the reward is negative infinite in all other cases. To make the equivalence to maximum likelihood learning clearer, note that the above M-step now reduces to",
"where the joint distribution $p(\\mathbf {x})\\exp \\lbrace R_\\delta \\rbrace /Z$ equals the empirical data distribution, which means the M-step is in fact maximizing the data log-likelihood of the model $p_\\theta $."
],
[
"There is a rich line of research on learning the reward in reinforcement learning. Of particular interest to this work is BIBREF7 which learns a parametric intrinsic reward that additively transforms the original task reward (a.k.a extrinsic reward) to improve the policy optimization. For consistency of notations with above, formally, let $p_\\theta (y|\\mathbf {x})$ be a policy where $y$ is an action and $\\mathbf {x}$ is a state. Let $R_\\phi ^{in}$ be the intrinsic reward with parameters $\\mathbf {\\phi }$. In each iteration, the policy parameter $\\mathbf {\\theta }$ is updated to maximize the joint rewards, through:",
"where $\\mathcal {L}^{ex+in}$ is the expectation of the sum of extrinsic and intrinsic rewards; and $\\gamma $ is the step size. The equation shows $\\mathbf {\\theta }^{\\prime }$ depends on $\\mathbf {\\phi }$, thus we can write as $\\mathbf {\\theta }^{\\prime }=\\mathbf {\\theta }^{\\prime }(\\mathbf {\\phi })$.",
"The next step is to optimize the intrinsic reward parameters $\\mathbf {\\phi }$. Recall that the ultimate measure of the performance of a policy is the value of extrinsic reward it achieves. Therefore, a good intrinsic reward is supposed to, when the policy is trained with it, increase the eventual extrinsic reward. The update to $\\mathbf {\\phi }$ is then written as:",
"That is, we want the expected extrinsic reward $\\mathcal {L}^{ex}(\\mathbf {\\theta }^{\\prime })$ of the new policy $\\mathbf {\\theta }^{\\prime }$ to be maximized. Since $\\mathbf {\\theta }^{\\prime }$ is a function of $\\mathbf {\\phi }$, we can directly backpropagate the gradient through $\\mathbf {\\theta }^{\\prime }$ to $\\mathbf {\\phi }$."
],
[
"We now develop our approach of learning data manipulation, through a novel marriage of supervised learning and the above reward learning. Specifically, from the policy optimization perspective, due to the $\\delta $-function reward (Eq.DISPLAY_FORM4), the standard maximum likelihood learning is restricted to use only the exact training examples $\\mathcal {D}$ in a uniform way. A natural idea of enabling data manipulation is to relax the strong restrictions of the $\\delta $-function reward and instead use a relaxed reward $R_\\phi (\\mathbf {x}, y | \\mathcal {D})$ with parameters $\\mathbf {\\phi }$. The relaxed reward can be parameterized in various ways, resulting in different types of manipulation. For example, when a sample $(\\mathbf {x}, y)$ matches a data instance, instead of returning constant 1 by $R_\\delta $, the new $R_\\phi $ can return varying reward values depending on the matched instance, resulting in a data weighting scheme. Alternatively, $R_\\phi $ can return a valid reward even when $\\mathbf {x}$ matches a data example only in part, or $(\\mathbf {x}, y)$ is an entire new sample not in $\\mathcal {D}$, which in effect makes data augmentation and data synthesis, respectively, in which cases $\\mathbf {\\phi }$ is either a data transformer or a generator. In the next section, we demonstrate two particular parameterizations for data augmentation and weighting, respectively.",
"We thus have shown that the diverse types of manipulation all boil down to a parameterized data reward $R_\\phi $. Such an concise, uniform formulation of data manipulation has the advantage that, once we devise a method of learning the manipulation parameters $\\mathbf {\\phi }$, the resulting algorithm can directly be applied to automate any manipulation type. We present a learning algorithm next."
],
[
"To learn the parameters $\\mathbf {\\phi }$ in the manipulation reward $R_\\phi (\\mathbf {x}, y | \\mathcal {D})$, we could in principle adopt any off-the-shelf reward learning algorithm in the literature. In this work, we draw inspiration from the above gradient-based reward learning (section SECREF3) due to its simplicity and efficiency. Briefly, the objective of $\\mathbf {\\phi }$ is to maximize the ultimate measure of the performance of model $p_\\theta (\\mathbf {y}|\\mathbf {x})$, which, in the context of supervised learning, is the model performance on a held-out validation set.",
"The algorithm optimizes $\\mathbf {\\theta }$ and $\\mathbf {\\phi }$ alternatingly, corresponding to Eq.(DISPLAY_FORM7) and Eq.(DISPLAY_FORM8), respectively. More concretely, in each iteration, we first update the model parameters $\\mathbf {\\theta }$ in analogue to Eq.(DISPLAY_FORM7) which optimizes intrinsic reward-enriched objective. Here, we optimize the log-likelihood of the training set enriched with data manipulation. That is, we replace $R_\\delta $ with $R_\\phi $ in Eq.(DISPLAY_FORM5), and obtain the augmented M-step:",
"By noticing that the new $\\mathbf {\\theta }^{\\prime }$ depends on $\\mathbf {\\phi }$, we can write $\\mathbf {\\theta }^{\\prime }$ as a function of $\\mathbf {\\phi }$, namely, $\\mathbf {\\theta }^{\\prime }=\\mathbf {\\theta }^{\\prime }(\\mathbf {\\phi })$. The practical implementation of the above update depends on the actual parameterization of manipulation $R_\\phi $, which we discuss in more details in the next section.",
"The next step is to optimize $\\mathbf {\\phi }$ in terms of the model validation performance, in analogue to Eq.(DISPLAY_FORM8). Formally, let $\\mathcal {D}^v$ be the validation set of data examples. The update is then:",
"where, since $\\mathbf {\\theta }^{\\prime }$ is a function of $\\mathbf {\\phi }$, the gradient is backpropagated to $\\mathbf {\\phi }$ through $\\mathbf {\\theta }^{\\prime }(\\mathbf {\\phi })$. Taking data weighting for example where $\\mathbf {\\phi }$ is the training sample weights (more details in section SECREF15), the update is to optimize the weights of training samples so that the model performs best on the validation set.",
"The resulting algorithm is summarized in Algorithm FIGREF11. Figure FIGREF11 illustrates the computation flow. Learning the manipulation parameters effectively uses a held-out validation set. We show in our experiments that a very small set of validation examples (e.g., 2 labels per class) is enough to significantly improve the model performance in low data regime.",
"It is worth noting that some previous work has also leveraged validation examples, such as learning data augmentation with policy gradient BIBREF3 or inducing data weights with meta-learning BIBREF4. Our approach is inspired from a distinct paradigm of (intrinsic) reward learning. In contrast to BIBREF3 that treats data augmentation as a policy, we instead formulate manipulation as a reward function and enable efficient stochastic gradient updates. Our approach is also more broadly applicable to diverse data manipulation types than BIBREF4, BIBREF3."
],
[
"As a case study, we show two parameterizations of $R_\\phi $ which instantiate distinct data manipulation schemes. The first example learns augmentation for text data, a domain that has been less studied in the literature compared to vision and speech BIBREF18, BIBREF38. The second instance focuses on automated data weighting, which is applicable to any data domains."
],
[
"The recent work BIBREF18, BIBREF19 developed a novel contextual augmentation approach for text data, in which a powerful pretrained language model (LM), such as BERT BIBREF8, is used to generate substitutions of words in a sentence. Specifically, given an observed sentence $\\mathbf {x}^*$, the method first randomly masks out a few words. The masked sentence is then fed to BERT which fills the masked positions with new words. To preserve the original sentence class, the BERT LM is retrofitted as a label-conditional model, and trained on the task training examples. The resulting model is then fixed and used to augment data during the training of target model. We denote the augmentation distribution as $g_{\\phi _0}(\\mathbf {x}|\\mathbf {x}^*, \\mathbf {y}^*)$, where $\\mathbf {\\phi }_0$ is the fixed BERT LM parameters.",
"The above process has two drawbacks. First, the LM is fixed after fitting to the task data. In the subsequent phase of training the target model, the LM augments data without knowing the state of the target model, which can lead to sub-optimal results. Second, in the cases where the task dataset is small, the LM can be insufficiently trained for preserving the labels faithfully, resulting in noisy augmented samples.",
"To address the difficulties, it is beneficial to apply the proposed learning data manipulation algorithm to additionally fine-tune the LM jointly with target model training. As discussed in section SECREF4, this reduces to properly parameterizing the data reward function:",
"That is, a sample $(\\mathbf {x}, y)$ receives a unit reward when $y$ is the true label and $\\mathbf {x}$ is the augmented sample by the LM (instead of the exact original data $\\mathbf {x}^*$). Plugging the reward into Eq.(DISPLAY_FORM13), we obtain the data-augmented update for the model parameters:",
"That is, we pick an example from the training set, and use the LM to create augmented samples, which are then used to update the target model. Regarding the update of augmentation parameters $\\mathbf {\\phi }$ (Eq.DISPLAY_FORM14), since text samples are discrete, to enable efficient gradient propagation through $\\mathbf {\\theta }^{\\prime }$ to $\\mathbf {\\phi }$, we use a gumbel-softmax approximation BIBREF39 to $\\mathbf {x}$ when sampling substitution words from the LM."
],
[
"We now demonstrate the instantiation of data weighting. We aim to assign an importance weight to each training example to adapt its effect on model training. We automate the process by learning the data weights. This is achieved by parameterizing $R_\\phi $ as:",
"where $\\phi _i\\in \\mathbb {R}$ is the weight associated with the $i$th example. Plugging $R^{w}_\\phi $ into Eq.(DISPLAY_FORM13), we obtain the weighted update for the model $\\mathbf {\\theta }$:",
"In practice, when minibatch stochastic optimization is used, we approximate the weighted sampling by taking the softmax over the weights of only the minibatch examples. The data weights $\\mathbf {\\phi }$ are updated with Eq.(DISPLAY_FORM14). It is worth noting that the previous work BIBREF4 similarly derives data weights based on their gradient directions on a validation set. Our algorithm differs in that the data weights are parameters maintained and updated throughout the training, instead of re-estimated from scratch in each iteration. Experiments show the parametric treatment achieves superior performance in various settings. There are alternative parameterizations of $R_\\phi $ other than Eq.(DISPLAY_FORM20). For example, replacing $\\phi _i$ in Eq.(DISPLAY_FORM20) with $\\log \\phi _i$ in effect changes the softmax normalization in Eq.(DISPLAY_FORM21) to linear normalization, which is used in BIBREF4."
],
[
"We empirically validate the proposed data manipulation approach through extensive experiments on learning augmentation and weighting. We study both text and image classification, in two difficult settings of low data regime and imbalanced labels."
],
[
"Base Models. We choose strong pretrained networks as our base models for both text and image classification. Specifically, on text data, we use the BERT (base, uncased) model BIBREF8; while on image data, we use ResNet-34 BIBREF9 pretrained on ImageNet. We show that, even with the large-scale pretraining, data manipulation can still be very helpful to boost the model performance on downstream tasks. Since our approach uses validation sets for manipulation parameter learning, for a fair comparison with the base model, we train the base model in two ways. The first is to train the model on the training sets as usual and select the best step using the validation sets; the second is to train on the merged training and validation sets for a fixed number of steps. The step number is set to the average number of steps selected in the first method. We report the results of both methods.",
"Comparison Methods. We compare our approach with a variety of previous methods that were designed for specific manipulation schemes: (1) For text data augmentation, we compare with the latest model-based augmentation BIBREF19 which uses a fixed conditional BERT language model for word substitution (section SECREF15). As with base models, we also tried fitting the augmentatin model to both the training data and the joint training-validation data, and did not observe significant difference. Following BIBREF19, we also study a conventional approach that replaces words with their synonyms using WordNet BIBREF40. (2) For data weighting, we compare with the state-of-the-art approach BIBREF4 that dynamically re-estimates sample weights in each iteration based on the validation set gradient directions. We follow BIBREF4 and also evaluate the commonly-used proportion method that weights data by inverse class frequency.",
"Training. For both the BERT classifier and the augmentation model (which is also based on BERT), we use Adam optimization with an initial learning rate of 4e-5. For ResNets, we use SGD optimization with a learning rate of 1e-3. For text data augmentation, we augment each minibatch by generating two or three samples for each data points (each with 1, 2 or 3 substitutions), and use both the samples and the original data to train the model. For data weighting, to avoid exploding value, we update the weight of each data point in a minibatch by decaying the previous weight value with a factor of 0.1 and then adding the gradient. All experiments were implemented with PyTorch (pytorch.org) and were performed on a Linux machine with 4 GTX 1080Ti GPUs and 64GB RAM. All reported results are averaged over 15 runs $\\pm $ one standard deviation."
],
[
"We study the problem where only very few labeled examples for each class are available. Both of our augmentation and weighting boost base model performance, and are superior to respective comparison methods. We also observe that augmentation performs better than weighting in the low-data setting."
],
[
"For text classification, we use the popular benchmark datasets, including SST-5 for 5-class sentence sentiment BIBREF41, IMDB for binary movie review sentiment BIBREF42, and TREC for 6-class question types BIBREF43. We subsample a small training set on each task by randomly picking 40 instances for each class. We further create small validation sets, i.e., 2 instances per class for SST-5, and 5 instances per class for IMDB and TREC, respectively. The reason we use slightly more validation examples on IMDB and TREC is that the model can easily achieve 100% validation accuracy if the validation sets are too small. Thus, the SST-5 task has 210 labeled examples in total, while IMDB has 90 labels and TREC has 270. Such extremely small datasets pose significant challenges for learning deep neural networks. Since the manipulation parameters are trained using the small validation sets, to avoid possible overfitting we restrict the training to small number (e.g., 5 or 10) of epochs. For image classification, we similarly create a small subset of the CIFAR10 data, which includes 40 instances per class for training, and 2 instances per class for validation."
],
[
"Table TABREF26 shows the manipulation results on text classification. For data augmentation, our approach significantly improves over the base model on all the three datasets. Besides, compared to both the conventional synonym substitution and the approach that keeps the augmentation network fixed, our adaptive method that fine-tunes the augmentation network jointly with model training achieves superior results. Indeed, the heuristic-based synonym approach can sometimes harm the model performance (e.g., SST-5 and IMDB), as also observed in previous work BIBREF19, BIBREF18. This can be because the heuristic rules do not fit the task or datasets well. In contrast, learning-based augmentation has the advantage of adaptively generating useful samples to improve model training.",
"Table TABREF26 also shows the data weighting results. Our weight learning consistently improves over the base model and the latest weighting method BIBREF4. In particular, instead of re-estimating sample weights from scratch in each iteration BIBREF4, our approach treats the weights as manipulation parameters maintained throughout the training. We speculate that the parametric treatment can adapt weights more smoothly and provide historical information, which is beneficial in the small-data context.",
"It is interesting to see from Table TABREF26 that our augmentation method consistently outperforms the weighting method, showing that data augmentation can be a more suitable technique than data weighting for manipulating small-size data. Our approach provides the generality to instantiate diverse manipulation types and learn with the same single procedure.",
"To investigate the augmentation model and how the fine-tuning affects the augmentation results, we show in Figure TABREF27 the top-5 most probable word substitutions predicted by the augmentation model for two masked tokens, respectively. Comparing the results of epoch 1 and epoch 3, we can see the augmentation model evolves and dynamically adjusts the augmentation behavior as the training proceeds. Through fine-tuning, the model seems to make substitutions that are more coherent with the conditioning label and relevant to the original words (e.g., replacing the word “striking” with “bland” in epoch 1 v.s. “charming” in epoch 3).",
"Table TABREF27 shows the data weighting results on image classification. We evaluate two settings with the ResNet-34 base model being initialized randomly or with pretrained weights, respectively. Our data weighting consistently improves over the base model and BIBREF4 regardless of the initialization."
],
[
"We next study a different problem setting where the training data of different classes are imbalanced. We show the data weighting approach greatly improves the classification performance. It is also observed that, the LM data augmentation approach, which performs well in the low-data setting, fails on the class-imbalance problems."
],
[
"Though the methods are broadly applicable to multi-way classification problems, here we only study binary classification tasks for simplicity. For text classification, we use the SST-2 sentiment analysis benchmark BIBREF41; while for image, we select class 1 and 2 from CIFAR10 for binary classification. We use the same processing on both datasets to build the class-imbalance setting. Specifically, we randomly select 1,000 training instances of class 2, and vary the number of class-1 instances in $\\lbrace 20, 50, 100\\rbrace $. For each dataset, we use 10 validation examples in each class. Trained models are evaluated on the full binary-class test set."
],
[
"Table TABREF29 shows the classification results on SST-2 with varying imbalance ratios. We can see our data weighting performs best across all settings. In particular, the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000. Our method is again consistently better than BIBREF4, validating that the parametric treatment is beneficial. The proportion-based data weighting provides only limited improvement, showing the advantage of adaptive data weighting. The base model trained on the joint training-validation data for fixed steps fails to perform well, partly due to the lack of a proper mechanism for selecting steps.",
"Table TABREF30 shows the results on imbalanced CIFAR10 classification. Similarly, our method outperforms other comparison approaches. In contrast, the fixed proportion-based method sometimes harms the performance as in the 50:1000 and 100:1000 settings.",
"We also tested the text augmentation LM on the SST-2 imbalanced data. Interestingly, the augmentation tends to hinder model training and yields accuracy of around 50% (random guess). This is because the augmentation LM is first fit to the imbalanced data, which makes label preservation inaccurate and introduces lots of noise during augmentation. Though a more carefully designed augmentation mechanism can potentially help with imbalanced classification (e.g., augmenting only the rare classes), the above observation further shows that the varying data manipulation schemes have different applicable scopes. Our approach is thus favorable as the single algorithm can be instantiated to learn different schemes."
],
[
"Conclusions. We have developed a new method of learning different data manipulation schemes with the same single algorithm. Different manipulation schemes reduce to just different parameterization of the data reward function. The manipulation parameters are trained jointly with the target model parameters. We instantiate the algorithm for data augmentation and weighting, and show improved performance over strong base models and previous manipulation methods. We are excited to explore more types of manipulations such as data synthesis, and in particular study the combination of different manipulation schemes.",
"The proposed method builds upon the connections between supervised learning and reinforcement learning (RL) BIBREF6 through which we extrapolate an off-the-shelf reward learning algorithm in the RL literature to the supervised setting. The way we obtained the manipulation algorithm represents a general means of innovating problem solutions based on unifying formalisms of different learning paradigms. Specifically, a unifying formalism not only offers new understandings of the seemingly distinct paradigms, but also allows us to systematically apply solutions to problems in one paradigm to similar problems in another. Previous work along this line has made fruitful results in other domains. For example, an extended formulation of BIBREF6 that connects RL and posterior regularization (PR) BIBREF44, BIBREF45 has enabled to similarly export a reward learning algorithm to the context of PR for learning structured knowledge BIBREF46. By establishing a uniform abstration of GANs BIBREF47 and VAEs BIBREF33, BIBREF48 exchange techniques between the two families and get improved generative modeling. Other work in the similar spirit includes BIBREF49, BIBREF50, BIBREF51.",
"By extrapolating algorithms between paradigms, one can go beyond crafting new algorithms from scratch as in most existing studies, which often requires deep expertise and yields unique solutions in a dedicated context. Instead, innovation becomes easier by importing rich ideas from other paradigms, and is repeatable as a new algorithm can be methodically extrapolated to multiple different contexts."
]
],
"section_name": [
"Introduction",
"Related Work",
"Background",
"Background ::: Equivalence between Data and Reward",
"Background ::: Gradient-based Reward Learning",
"Learning Data Manipulation ::: Method ::: Parameterizing Data Manipulation",
"Learning Data Manipulation ::: Method ::: Learning Manipulation Parameters",
"Learning Data Manipulation ::: Instantiations: Augmentation & Weighting",
"Learning Data Manipulation ::: Instantiations: Augmentation & Weighting ::: Fine-tuning Text Augmentation",
"Learning Data Manipulation ::: Instantiations: Augmentation & Weighting ::: Learning Data Weights",
"Experiments",
"Experiments ::: Experimental Setup",
"Experiments ::: Low Data Regime",
"Experiments ::: Low Data Regime ::: Setup",
"Experiments ::: Low Data Regime ::: Results",
"Experiments ::: Imbalanced Labels",
"Experiments ::: Imbalanced Labels ::: Setup",
"Experiments ::: Imbalanced Labels ::: Results",
"Discussions: Algorithm Extrapolation between Learning Paradigms"
]
} | {
"answers": [
{
"annotation_id": [
"44ddeed35333b2be8b839b1f72b41415ef3288e2"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Accuracy of Data Manipulation on Text Classification. All results are averaged over 15 runs ± one standard deviation. The numbers in parentheses next to the dataset names indicate the size of the datasets. For example, (40+2) denotes 40 training instances and 2 validation instances per class.",
"Table TABREF26 shows the manipulation results on text classification. For data augmentation, our approach significantly improves over the base model on all the three datasets. Besides, compared to both the conventional synonym substitution and the approach that keeps the augmentation network fixed, our adaptive method that fine-tunes the augmentation network jointly with model training achieves superior results. Indeed, the heuristic-based synonym approach can sometimes harm the model performance (e.g., SST-5 and IMDB), as also observed in previous work BIBREF19, BIBREF18. This can be because the heuristic rules do not fit the task or datasets well. In contrast, learning-based augmentation has the advantage of adaptively generating useful samples to improve model training.",
"Table TABREF29 shows the classification results on SST-2 with varying imbalance ratios. We can see our data weighting performs best across all settings. In particular, the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000. Our method is again consistently better than BIBREF4, validating that the parametric treatment is beneficial. The proportion-based data weighting provides only limited improvement, showing the advantage of adaptive data weighting. The base model trained on the joint training-validation data for fixed steps fails to perform well, partly due to the lack of a proper mechanism for selecting steps."
],
"extractive_spans": [],
"free_form_answer": "Low data: SST-5, TREC, IMDB around 1-2 accuracy points better than baseline\nImbalanced labels: the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Accuracy of Data Manipulation on Text Classification. All results are averaged over 15 runs ± one standard deviation. The numbers in parentheses next to the dataset names indicate the size of the datasets. For example, (40+2) denotes 40 training instances and 2 validation instances per class.",
"Table TABREF26 shows the manipulation results on text classification. For data augmentation, our approach significantly improves over the base model on all the three datasets.",
"Table TABREF29 shows the classification results on SST-2 with varying imbalance ratios. We can see our data weighting performs best across all settings. In particular, the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"16835d44ce5e50af3a3a17259558d5bfc399afec",
"d7161aab4c32a2f34a7418fbc4b64523baa68833"
],
"answer": [
{
"evidence": [
"In this work, we propose a new approach that enables learning for different manipulation schemes with the same single algorithm. Our approach draws inspiration from the recent work BIBREF6 that shows equivalence between the data in supervised learning and the reward function in reinforcement learning. We thus adapt an off-the-shelf reward learning algorithm BIBREF7 to the supervised setting for automated data manipulation. The marriage of the two paradigms results in a simple yet general algorithm, where various manipulation schemes are reduced to different parameterization of the data reward. Free parameters of manipulation are learned jointly with the target model through efficient gradient descent on validation examples. We demonstrate instantiations of the approach for automatically fine-tuning an augmentation network and learning data weights, respectively."
],
"extractive_spans": [
"BIBREF7"
],
"free_form_answer": "",
"highlighted_evidence": [
"We thus adapt an off-the-shelf reward learning algorithm BIBREF7 to the supervised setting for automated data manipulation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this work, we propose a new approach that enables learning for different manipulation schemes with the same single algorithm. Our approach draws inspiration from the recent work BIBREF6 that shows equivalence between the data in supervised learning and the reward function in reinforcement learning. We thus adapt an off-the-shelf reward learning algorithm BIBREF7 to the supervised setting for automated data manipulation. The marriage of the two paradigms results in a simple yet general algorithm, where various manipulation schemes are reduced to different parameterization of the data reward. Free parameters of manipulation are learned jointly with the target model through efficient gradient descent on validation examples. We demonstrate instantiations of the approach for automatically fine-tuning an augmentation network and learning data weights, respectively."
],
"extractive_spans": [
" reward learning algorithm BIBREF7"
],
"free_form_answer": "",
"highlighted_evidence": [
"We thus adapt an off-the-shelf reward learning algorithm BIBREF7 to the supervised setting for automated data manipulation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"How much is classification performance improved in experiments for low data regime and class-imbalance problems?",
"What off-the-shelf reward learning algorithm from RL for joint data manipulation learning and model training is adapted?"
],
"question_id": [
"3415762847ed13acc3c90de60e3ef42612bc49af",
"223dc2b9ea34addc0f502003c2e1c1141f6b36a7"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Algorithm Computation. Blue arrows denote learning model θ. Red arrows denote learning manipulation φ. Solid arrows denote forward pass. Dashed arrows denote backward pass and parameter updates.",
"Table 1: Accuracy of Data Manipulation on Text Classification. All results are averaged over 15 runs ± one standard deviation. The numbers in parentheses next to the dataset names indicate the size of the datasets. For example, (40+2) denotes 40 training instances and 2 validation instances per class.",
"Figure 2: Words predicted with the highest probabilities by the augmentation LM. Two tokens “striking” and “grey” are masked for substitution. The boxes in respective colors list the predicted words after training epoch 1 and 3, respectively. E.g., “stunning” is the most probable substitution for “striking” in epoch 1.",
"Table 3: Accuracy of Data Weighting on Imbalanced SST-2. The first row shows the number of training examples in each of the two classes.",
"Table 4: Accuracy of Data Weighting on Imbalanced CIFAR10. The first row shows the number of training examples in each of the two classes."
],
"file": [
"4-Figure1-1.png",
"7-Table1-1.png",
"7-Figure2-1.png",
"8-Table3-1.png",
"8-Table4-1.png"
]
} | [
"How much is classification performance improved in experiments for low data regime and class-imbalance problems?"
] | [
[
"1910.12795-7-Table1-1.png",
"1910.12795-Experiments ::: Low Data Regime ::: Results-0",
"1910.12795-Experiments ::: Imbalanced Labels ::: Results-0"
]
] | [
"Low data: SST-5, TREC, IMDB around 1-2 accuracy points better than baseline\nImbalanced labels: the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000"
] | 235 |
1805.10824 | UG18 at SemEval-2018 Task 1: Generating Additional Training Data for Predicting Emotion Intensity in Spanish | The present study describes our submission to SemEval 2018 Task 1: Affect in Tweets. Our Spanish-only approach aimed to demonstrate that it is beneficial to automatically generate additional training data by (i) translating training data from other languages and (ii) applying a semi-supervised learning method. We find strong support for both approaches, with those models outperforming our regular models in all subtasks. However, creating a stepwise ensemble of different models as opposed to simply averaging did not result in an increase in performance. We placed second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc) in the four Spanish subtasks we participated in. | {
"paragraphs": [
[
"Understanding the emotions expressed in a text or message is of high relevance nowadays. Companies are interested in this to get an understanding of the sentiment of their current customers regarding their products and the sentiment of their potential customers to attract new ones. Moreover, changes in a product or a company may also affect the sentiment of a customer. However, the intensity of an emotion is crucial in determining the urgency and importance of that sentiment. If someone is only slightly happy about a product, is a customer willing to buy it again? Conversely, if someone is very angry about customer service, his or her complaint might be given priority over somewhat milder complaints.",
" BIBREF0 present four tasks in which systems have to automatically determine the intensity of emotions (EI) or the intensity of the sentiment (Valence) of tweets in the languages English, Arabic, and Spanish. The goal is to either predict a continuous regression (reg) value or to do ordinal classification (oc) based on a number of predefined categories. The EI tasks have separate training sets for four different emotions: anger, fear, joy and sadness. Due to the large number of subtasks and the fact that this language does not have many resources readily available, we only focus on the Spanish subtasks. Our work makes the following contributions:",
"Our submissions ranked second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc), demonstrating that the proposed method is accurate in automatically determining the intensity of emotions and sentiment of Spanish tweets. This paper will first focus on the datasets, the data generation procedure, and the techniques and tools used. Then we present the results in detail, after which we perform a small error analysis on the largest mistakes our model made. We conclude with some possible ideas for future work."
],
[
"For each task, the training data that was made available by the organizers is used, which is a selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment BIBREF1 . Links and usernames were replaced by the general tokens URL and @username, after which the tweets were tokenized by using TweetTokenizer. All text was lowercased. In a post-processing step, it was ensured that each emoji is tokenized as a single token."
],
[
"To be able to train word embeddings, Spanish tweets were scraped between November 8, 2017 and January 12, 2018. We chose to create our own embeddings instead of using pre-trained embeddings, because this way the embeddings would resemble the provided data set: both are based on Twitter data. Added to this set was the Affect in Tweets Distant Supervision Corpus (DISC) made available by the organizers BIBREF0 and a set of 4.1 million tweets from 2015, obtained from BIBREF2 . After removing duplicate tweets and tweets with fewer than ten tokens, this resulted in a set of 58.7 million tweets, containing 1.1 billion tokens. The tweets were preprocessed using the method described in Section SECREF6 . The word embeddings were created using word2vec in the gensim library BIBREF3 , using CBOW, a window size of 40 and a minimum count of 5. The feature vectors for each tweet were then created by using the AffectiveTweets WEKA package BIBREF4 ."
],
[
"Most lexical resources for sentiment analysis are in English. To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 .",
"All lexicons from the AffectiveTweets package were translated, except for SentiStrength. Instead of translating this lexicon, the English version was replaced by the Spanish variant made available by BIBREF6 .",
"For each subtask, the optimal combination of lexicons was determined. This was done by first calculating the benefits of adding each lexicon individually, after which only beneficial lexicons were added until the score did not increase anymore (e.g. after adding the best four lexicons the fifth one did not help anymore, so only four were added). The tests were performed using a default SVM model, with the set of word embeddings described in the previous section. Each subtask thus uses a different set of lexicons (see Table TABREF1 for an overview of the lexicons used in our final ensemble). For each subtask, this resulted in a (modest) increase on the development set, between 0.01 and 0.05."
],
[
"The training set provided by BIBREF0 is not very large, so it was interesting to find a way to augment the training set. A possible method is to simply translate the datasets into other languages, leaving the labels intact. Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. This new set of “Spanish” data was then added to our original training set. Again, the machine translation platform Apertium BIBREF5 was used for the translation of the datasets."
],
[
"Three types of models were used in our system, a feed-forward neural network, an LSTM network and an SVM regressor. The neural nets were inspired by the work of Prayas BIBREF7 in the previous shared task. Different regression algorithms (e.g. AdaBoost, XGBoost) were also tried due to the success of SeerNet BIBREF8 , but our study was not able to reproduce their results for Spanish.",
"For both the LSTM network and the feed-forward network, a parameter search was done for the number of layers, the number of nodes and dropout used. This was done for each subtask, i.e. different tasks can have a different number of layers. All models were implemented using Keras BIBREF9 . After the best parameter settings were found, the results of 10 system runs to produce our predictions were averaged (note that this is different from averaging our different type of models in Section SECREF16 ). For the SVM (implemented in scikit-learn BIBREF10 ), the RBF kernel was used and a parameter search was conducted for epsilon. Detailed parameter settings for each subtask are shown in Table TABREF12 . Each parameter search was performed using 10-fold cross validation, as to not overfit on the development set."
],
[
"One of the aims of this study was to see if using semi-supervised learning is beneficial for emotion intensity tasks. For this purpose, the DISC BIBREF0 corpus was used. This corpus was created by querying certain emotion-related words, which makes it very suitable as a semi-supervised corpus. However, the specific emotion the tweet belonged to was not made public. Therefore, a method was applied to automatically assign the tweets to an emotion by comparing our scraped tweets to this new data set.",
"First, in an attempt to obtain the query-terms, we selected the 100 words which occurred most frequently in the DISC corpus, in comparison with their frequencies in our own scraped tweets corpus. Words that were clearly not indicators of emotion were removed. The rest was annotated per emotion or removed if it was unclear to which emotion the word belonged. This allowed us to create silver datasets per emotion, assigning tweets to an emotion if an annotated emotion-word occurred in the tweet.",
"Our semi-supervised approach is quite straightforward: first a model is trained on the training set and then this model is used to predict the labels of the silver data. This silver data is then simply added to our training set, after which the model is retrained. However, an extra step is applied to ensure that the silver data is of reasonable quality. Instead of training a single model initially, ten different models were trained which predict the labels of the silver instances. If the highest and lowest prediction do not differ more than a certain threshold the silver instance is maintained, otherwise it is discarded.",
"This results in two parameters that could be optimized: the threshold and the number of silver instances that would be added. This method can be applied to both the LSTM and feed-forward networks that were used. An overview of the characteristics of our data set with the final parameter settings is shown in Table TABREF14 . Usually, only a small subset of data was added to our training set, meaning that most of the silver data is not used in the experiments. Note that since only the emotions were annotated, this method is only applicable to the EI tasks."
],
[
"To boost performance, the SVM, LSTM, and feed-forward models were combined into an ensemble. For both the LSTM and feed-forward approach, three different models were trained. The first model was trained on the training data (regular), the second model was trained on both the training and translated training data (translated) and the third one was trained on both the training data and the semi-supervised data (silver). Due to the nature of the SVM algorithm, semi-supervised learning does not help, so only the regular and translated model were trained in this case. This results in 8 different models per subtask. Note that for the valence tasks no silver training data was obtained, meaning that for those tasks the semi-supervised models could not be used.",
"Per task, the LSTM and feed-forward model's predictions were averaged over 10 prediction runs. Subsequently, the predictions of all individual models were combined into an average. Finally, models were removed from the ensemble in a stepwise manner if the removal increased the average score. This was done based on their original scores, i.e. starting out by trying to remove the worst individual model and working our way up to the best model. We only consider it an increase in score if the difference is larger than 0.002 (i.e. the difference between 0.716 and 0.718). If at some point the score does not increase and we are therefore unable to remove a model, the process is stopped and our best ensemble of models has been found. This process uses the scores on the development set of different combinations of models. Note that this means that the ensembles for different subtasks can contain different sets of models. The final model selections can be found in Table TABREF17 ."
],
[
"Table TABREF18 shows the results on the development set of all individuals models, distinguishing the three types of training: regular (r), translated (t) and semi-supervised (s). In Tables TABREF17 and TABREF18 , the letter behind each model (e.g. SVM-r, LSTM-r) corresponds to the type of training used. Comparing the regular and translated columns for the three algorithms, it shows that in 22 out of 30 cases, using translated instances as extra training data resulted in an improvement. For the semi-supervised learning approach, an improvement is found in 15 out of 16 cases. Moreover, our best individual model for each subtask (bolded scores in Table TABREF18 ) is always either a translated or semi-supervised model. Table TABREF18 also shows that, in general, our feed-forward network obtained the best results, having the highest F-score for 8 out of 10 subtasks.",
"However, Table TABREF19 shows that these scores can still be improved by averaging or ensembling the individual models. On the dev set, averaging our 8 individual models results in a better score for 8 out of 10 subtasks, while creating an ensemble beats all of the individual models as well as the average for each subtask. On the test set, however, only a small increase in score (if any) is found for stepwise ensembling, compared to averaging. Even though the results do not get worse, we cannot conclude that stepwise ensembling is a better method than simply averaging.",
"Our official scores (column Ens Test in Table TABREF19 ) have placed us second (EI-Reg, EI-Oc), fourth (V-Reg) and fifth (V-Oc) on the SemEval AIT-2018 leaderboard. However, it is evident that the results obtained on the test set are not always in line with those achieved on the development set. Especially on the anger subtask for both EI-Reg and EI-Oc, the scores are considerably lower on the test set in comparison with the results on the development set. Therefore, a small error analysis was performed on the instances where our final model made the largest errors."
],
[
"Due to some large differences between our results on the dev and test set of this task, we performed a small error analysis in order to see what caused these differences. For EI-Reg-anger, the gold labels were compared to our own predictions, and we manually checked 50 instances for which our system made the largest errors.",
"Some examples that were indicative of the shortcomings of our system are shown in Table TABREF20 . First of all, our system did not take into account capitalization. The implications of this are shown in the first sentence, where capitalization intensifies the emotion used in the sentence. In the second sentence, the name Imperator Furiosa is not understood. Since our texts were lowercased, our system was unable to capture the named entity and thought the sentence was about an angry emperor instead. In the third sentence, our system fails to capture that when you are so angry that it makes you laugh, it results in a reduced intensity of the angriness. Finally, in the fourth sentence, it is the figurative language me infla la vena (it inflates my vein) that the system is not able to understand.",
"The first two error-categories might be solved by including smart features regarding capitalization and named entity recognition. However, the last two categories are problems of natural language understanding and will be very difficult to fix."
],
[
"To conclude, the present study described our submission for the Semeval 2018 Shared Task on Affect in Tweets. We participated in four Spanish subtasks and our submissions ranked second, second, fourth and fifth place. Our study aimed to investigate whether the automatic generation of additional training data through translation and semi-supervised learning, as well as the creation of stepwise ensembles, increase the performance of our Spanish-language models. Strong support was found for the translation and semi-supervised learning approaches; our best models for all subtasks use either one of these approaches. These results suggest that both of these additional data resources are beneficial when determining emotion intensity (for Spanish). However, the creation of a stepwise ensemble from the best models did not result in better performance compared to simply averaging the models. In addition, some signs of overfitting on the dev set were found. In future work, we would like to apply the methods (translation and semi-supervised learning) used on Spanish on other low-resource languages and potentially also on other tasks."
]
],
"section_name": [
"Introduction",
"Data",
"Word Embeddings",
"Translating Lexicons",
"Translating Data",
"Algorithms Used",
"Semi-supervised Learning",
"Ensembling",
"Results and Discussion",
"Error Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"c70525cd6ca8b3e7ea9913bc61e016d6444b76b6"
],
"answer": [
{
"evidence": [
"Our submissions ranked second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc), demonstrating that the proposed method is accurate in automatically determining the intensity of emotions and sentiment of Spanish tweets. This paper will first focus on the datasets, the data generation procedure, and the techniques and tools used. Then we present the results in detail, after which we perform a small error analysis on the largest mistakes our model made. We conclude with some possible ideas for future work."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Subscript 1: \"We did not participate in subtask 5 (E-c)\") Authors participated in EI-Reg, EI-Oc, V-Reg and V-Oc subtasks.",
"highlighted_evidence": [
"Our submissions ranked second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc), demonstrating that the proposed method is accurate in automatically determining the intensity of emotions and sentiment of Spanish tweets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"d7423d91d6f84667668f608d52a539466127b417"
],
"answer": [
{
"evidence": [
"Our official scores (column Ens Test in Table TABREF19 ) have placed us second (EI-Reg, EI-Oc), fourth (V-Reg) and fifth (V-Oc) on the SemEval AIT-2018 leaderboard. However, it is evident that the results obtained on the test set are not always in line with those achieved on the development set. Especially on the anger subtask for both EI-Reg and EI-Oc, the scores are considerably lower on the test set in comparison with the results on the development set. Therefore, a small error analysis was performed on the instances where our final model made the largest errors."
],
"extractive_spans": [
"column Ens Test in Table TABREF19"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our official scores (column Ens Test in Table TABREF19 ) have placed us second (EI-Reg, EI-Oc), fourth (V-Reg) and fifth (V-Oc) on the SemEval AIT-2018 leaderboard."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c634ecfb6a15f9f6668b671302ac848e7ae2f9c9",
"e6790926e216e989bdae0561f46a5a5415e0af2f"
],
"answer": [
{
"evidence": [
"Most lexical resources for sentiment analysis are in English. To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 ."
],
"extractive_spans": [
"using the machine translation platform Apertium "
],
"free_form_answer": "",
"highlighted_evidence": [
"Most lexical resources for sentiment analysis are in English. To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Most lexical resources for sentiment analysis are in English. To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 ."
],
"extractive_spans": [
"machine translation platform Apertium BIBREF5"
],
"free_form_answer": "",
"highlighted_evidence": [
"To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9900691178edaaaeefed1563e83176d0e23a1514",
"fe8a951d04fc4476611d1c850e0625590052933a"
],
"answer": [
{
"evidence": [
"For each task, the training data that was made available by the organizers is used, which is a selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment BIBREF1 . Links and usernames were replaced by the general tokens URL and @username, after which the tweets were tokenized by using TweetTokenizer. All text was lowercased. In a post-processing step, it was ensured that each emoji is tokenized as a single token.",
"The training set provided by BIBREF0 is not very large, so it was interesting to find a way to augment the training set. A possible method is to simply translate the datasets into other languages, leaving the labels intact. Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. This new set of “Spanish” data was then added to our original training set. Again, the machine translation platform Apertium BIBREF5 was used for the translation of the datasets."
],
"extractive_spans": [],
"free_form_answer": " Selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment provided by organizers and tweets translated form English to Spanish.",
"highlighted_evidence": [
"For each task, the training data that was made available by the organizers is used, which is a selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment BIBREF1 . ",
"Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. This new set of “Spanish” data was then added to our original training set. Again, the machine translation platform Apertium BIBREF5 was used for the translation of the datasets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For each task, the training data that was made available by the organizers is used, which is a selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment BIBREF1 . Links and usernames were replaced by the general tokens URL and @username, after which the tweets were tokenized by using TweetTokenizer. All text was lowercased. In a post-processing step, it was ensured that each emoji is tokenized as a single token.",
"To be able to train word embeddings, Spanish tweets were scraped between November 8, 2017 and January 12, 2018. We chose to create our own embeddings instead of using pre-trained embeddings, because this way the embeddings would resemble the provided data set: both are based on Twitter data. Added to this set was the Affect in Tweets Distant Supervision Corpus (DISC) made available by the organizers BIBREF0 and a set of 4.1 million tweets from 2015, obtained from BIBREF2 . After removing duplicate tweets and tweets with fewer than ten tokens, this resulted in a set of 58.7 million tweets, containing 1.1 billion tokens. The tweets were preprocessed using the method described in Section SECREF6 . The word embeddings were created using word2vec in the gensim library BIBREF3 , using CBOW, a window size of 40 and a minimum count of 5. The feature vectors for each tweet were then created by using the AffectiveTweets WEKA package BIBREF4 ."
],
"extractive_spans": [
"Spanish tweets were scraped between November 8, 2017 and January 12, 2018",
"Affect in Tweets Distant Supervision Corpus (DISC)"
],
"free_form_answer": "",
"highlighted_evidence": [
"For each task, the training data that was made available by the organizers is used, which is a selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment BIBREF1 .",
"To be able to train word embeddings, Spanish tweets were scraped between November 8, 2017 and January 12, 2018.",
"Added to this set was the Affect in Tweets Distant Supervision Corpus (DISC) made available by the organizers BIBREF0 and a set of 4.1 million tweets from 2015, obtained from BIBREF2 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"860010736dc61f7078cbc0d327d7fb7a4ba6fc61",
"882693c83b0309cdef86bd9d668d339ddd9598e0"
],
"answer": [
{
"evidence": [
"The training set provided by BIBREF0 is not very large, so it was interesting to find a way to augment the training set. A possible method is to simply translate the datasets into other languages, leaving the labels intact. Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. This new set of “Spanish” data was then added to our original training set. Again, the machine translation platform Apertium BIBREF5 was used for the translation of the datasets."
],
"extractive_spans": [
"English "
],
"free_form_answer": "",
"highlighted_evidence": [
"Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Most lexical resources for sentiment analysis are in English. To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 ."
],
"extractive_spans": [
"English"
],
"free_form_answer": "",
"highlighted_evidence": [
"Most lexical resources for sentiment analysis are in English. To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"17e56555909b07aff55a65340f6336c87b234d0d"
],
"answer": [
{
"evidence": [
"Our semi-supervised approach is quite straightforward: first a model is trained on the training set and then this model is used to predict the labels of the silver data. This silver data is then simply added to our training set, after which the model is retrained. However, an extra step is applied to ensure that the silver data is of reasonable quality. Instead of training a single model initially, ten different models were trained which predict the labels of the silver instances. If the highest and lowest prediction do not differ more than a certain threshold the silver instance is maintained, otherwise it is discarded."
],
"extractive_spans": [
"first a model is trained on the training set and then this model is used to predict the labels of the silver data",
"This silver data is then simply added to our training set, after which the model is retrained"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our semi-supervised approach is quite straightforward: first a model is trained on the training set and then this model is used to predict the labels of the silver data. This silver data is then simply added to our training set, after which the model is retrained. However, an extra step is applied to ensure that the silver data is of reasonable quality."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"What subtasks did they participate in?",
"What were the scores of their system?",
"How was the training data translated?",
"What dataset did they use?",
"What other languages did they translate the data from?",
"What semi-supervised learning is applied?"
],
"question_id": [
"e1ab11885f72b4658263a60751d956ba661c1d61",
"c85b6f9bafc4c64fc538108ab40a0590a2f5768e",
"8e52637026bee9061f9558178eaec08279bf7ac6",
"0f6216b9e4e59252b0c1adfd1a848635437dfcdc",
"22ccee453e37536ddb0c1c1d17b0dbac04c6c607",
"d00bbeda2a45495e6261548710afa6b21ea32870"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Lexicons included in our final ensemble. NRC-10 and SentiWordNet are left out of the table because they never improved the score for a task.",
"Table 2: Parameter settings for the algorithms used. For feed-forward, we show the number of nodes per layer. The Dense column for LSTM shows whether a dense layer was added after the LSTM layers (with half the number of nodes as is shown in the Nodes column). The feed-forward networks always use a dropout of 0.001 after the first layer.",
"Table 3: Statistics and parameter settings of the semi-supervised learning experiments.",
"Table 4: Models included in our final ensemble.",
"Table 5: Scores for each individual model per subtask. Best individual score per subtask is bolded.",
"Table 6: Results on the dev and test set for averaging and stepwise ensembling the individual models. The last column shows our official results.",
"Table 7: Error analysis for the EI-Reg-anger subtask, with English translations."
],
"file": [
"2-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png",
"5-Table5-1.png",
"5-Table6-1.png",
"6-Table7-1.png"
]
} | [
"What subtasks did they participate in?",
"What dataset did they use?"
] | [
[
"1805.10824-Introduction-2"
],
[
"1805.10824-Translating Data-0",
"1805.10824-Data-0",
"1805.10824-Word Embeddings-0"
]
] | [
"Answer with content missing: (Subscript 1: \"We did not participate in subtask 5 (E-c)\") Authors participated in EI-Reg, EI-Oc, V-Reg and V-Oc subtasks.",
" Selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment provided by organizers and tweets translated form English to Spanish."
] | 236 |
2003.04866 | Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity | We introduce Multi-SimLex, a large-scale lexical resource and evaluation benchmark covering datasets for 12 typologically diverse languages, including major languages (e.g., Mandarin Chinese, Spanish, Russian) as well as less-resourced ones (e.g., Welsh, Kiswahili). Each language dataset is annotated for the lexical relation of semantic similarity and contains 1,888 semantically aligned concept pairs, providing a representative coverage of word classes (nouns, verbs, adjectives, adverbs), frequency ranks, similarity intervals, lexical fields, and concreteness levels. Additionally, owing to the alignment of concepts across languages, we provide a suite of 66 cross-lingual semantic similarity datasets. Due to its extensive size and language coverage, Multi-SimLex provides entirely novel opportunities for experimental evaluation and analysis. On its monolingual and cross-lingual benchmarks, we evaluate and analyze a wide array of recent state-of-the-art monolingual and cross-lingual representation models, including static and contextualized word embeddings (such as fastText, M-BERT and XLM), externally informed lexical representations, as well as fully unsupervised and (weakly) supervised cross-lingual word embeddings. We also present a step-by-step dataset creation protocol for creating consistent, Multi-SimLex-style resources for additional languages. We make these contributions -- the public release of Multi-SimLex datasets, their creation protocol, strong baseline results, and in-depth analyses which can be helpful in guiding future developments in multilingual lexical semantics and representation learning -- available via a website which will encourage community effort in further expansion of Multi-SimLex to many more languages. Such a large-scale semantic resource could inspire significant further advances in NLP across languages. | {
"paragraphs": [
[
"The lack of annotated training and evaluation data for many tasks and domains hinders the development of computational models for the majority of the world's languages BIBREF0, BIBREF1, BIBREF2. The necessity to guide and advance multilingual and cross-lingual NLP through annotation efforts that follow cross-lingually consistent guidelines has been recently recognized by collaborative initiatives such as the Universal Dependency (UD) project BIBREF3. The latest version of UD (as of March 2020) covers more than 70 languages. Crucially, this resource continues to steadily grow and evolve through the contributions of annotators from across the world, extending the UD's reach to a wide array of typologically diverse languages. Besides steering research in multilingual parsing BIBREF4, BIBREF5, BIBREF6 and cross-lingual parser transfer BIBREF7, BIBREF8, BIBREF9, the consistent annotations and guidelines have also enabled a range of insightful comparative studies focused on the languages' syntactic (dis)similarities BIBREF10, BIBREF11, BIBREF12.",
"Inspired by the UD work and its substantial impact on research in (multilingual) syntax, in this article we introduce Multi-SimLex, a suite of manually and consistently annotated semantic datasets for 12 different languages, focused on the fundamental lexical relation of semantic similarity BIBREF13, BIBREF14. For any pair of words, this relation measures whether their referents share the same (functional) features, as opposed to general cognitive association captured by co-occurrence patterns in texts (i.e., the distributional information). Datasets that quantify the strength of true semantic similarity between concept pairs such as SimLex-999 BIBREF14 or SimVerb-3500 BIBREF15 have been instrumental in improving models for distributional semantics and representation learning. Discerning between semantic similarity and relatedness/association is not only crucial for theoretical studies on lexical semantics (see §SECREF2), but has also been shown to benefit a range of language understanding tasks in NLP. Examples include dialog state tracking BIBREF16, BIBREF17, spoken language understanding BIBREF18, BIBREF19, text simplification BIBREF20, BIBREF21, BIBREF22, dictionary and thesaurus construction BIBREF23, BIBREF24.",
"Despite the proven usefulness of semantic similarity datasets, they are available only for a small and typologically narrow sample of resource-rich languages such as German, Italian, and Russian BIBREF25, whereas some language types and low-resource languages typically lack similar evaluation data. Even if some resources do exist, they are limited in their size (e.g., 500 pairs in Turkish BIBREF26, 500 in Farsi BIBREF27, or 300 in Finnish BIBREF28) and coverage (e.g., all datasets which originated from the original English SimLex-999 contain only high-frequent concepts, and are dominated by nouns). This is why, as our departure point, we introduce a larger and more comprehensive English word similarity dataset spanning 1,888 concept pairs (see §SECREF4).",
"Most importantly, semantic similarity datasets in different languages have been created using heterogeneous construction procedures with different guidelines for translation and annotation, as well as different rating scales. For instance, some datasets were obtained by directly translating the English SimLex-999 in its entirety BIBREF25, BIBREF16 or in part BIBREF28. Other datasets were created from scratch BIBREF26 and yet others sampled English concept pairs differently from SimLex-999 and then translated and reannotated them in target languages BIBREF27. This heterogeneity makes these datasets incomparable and precludes systematic cross-linguistic analyses. In this article, consolidating the lessons learned from previous dataset construction paradigms, we propose a carefully designed translation and annotation protocol for developing monolingual Multi-SimLex datasets with aligned concept pairs for typologically diverse languages. We apply this protocol to a set of 12 languages, including a mixture of major languages (e.g., Mandarin, Russian, and French) as well as several low-resource ones (e.g., Kiswahili, Welsh, and Yue Chinese). We demonstrate that our proposed dataset creation procedure yields data with high inter-annotator agreement rates (e.g., the average mean inter-annotator agreement for Welsh is 0.742).",
"The unified construction protocol and alignment between concept pairs enables a series of quantitative analyses. Preliminary studies on the influence that polysemy and cross-lingual variation in lexical categories (see §SECREF6) have on similarity judgments are provided in §SECREF5. Data created according to Multi-SimLex protocol also allow for probing into whether similarity judgments are universal across languages, or rather depend on linguistic affinity (in terms of linguistic features, phylogeny, and geographical location). We investigate this question in §SECREF25. Naturally, Multi-SimLex datasets can be used as an intrinsic evaluation benchmark to assess the quality of lexical representations based on monolingual, joint multilingual, and transfer learning paradigms. We conduct a systematic evaluation of several state-of-the-art representation models in §SECREF7, showing that there are large gaps between human and system performance in all languages. The proposed construction paradigm also supports the automatic creation of 66 cross-lingual Multi-SimLex datasets by interleaving the monolingual ones. We outline the construction of the cross-lingual datasets in §SECREF6, and then present a quantitative evaluation of a series of cutting-edge cross-lingual representation models on this benchmark in §SECREF8.",
"",
"Contributions. We now summarize the main contributions of this work:",
"",
"1) Building on lessons learned from prior work, we create a more comprehensive lexical semantic similarity dataset for the English language spanning a total of 1,888 concept pairs balanced with respect to similarity, frequency, and concreteness, and covering four word classes: nouns, verbs, adjectives and, for the first time, adverbs. This dataset serves as the main source for the creation of equivalent datasets in several other languages.",
"",
"2) We present a carefully designed and rigorous language-agnostic translation and annotation protocol. These well-defined guidelines will facilitate the development of future Multi-SimLex datasets for other languages. The proposed protocol eliminates some crucial issues with prior efforts focused on the creation of multi-lingual semantic resources, namely: i) limited coverage; ii) heterogeneous annotation guidelines; and iii) concept pairs which are semantically incomparable across different languages.",
"",
"3) We offer to the community manually annotated evaluation sets of 1,888 concept pairs across 12 typologically diverse languages, and 66 large cross-lingual evaluation sets. To the best of our knowledge, Multi-SimLex is the most comprehensive evaluation resource to date focused on the relation of semantic similarity.",
"",
"4) We benchmark a wide array of recent state-of-the-art monolingual and cross-lingual word representation models across our sample of languages. The results can serve as strong baselines that lay the foundation for future improvements.",
"",
"5) We present a first large-scale evaluation study on the ability of encoders pretrained on language modeling (such as bert BIBREF29 and xlm BIBREF30) to reason over word-level semantic similarity in different languages. To our own surprise, the results show that monolingual pretrained encoders, even when presented with word types out of context, are sometimes competitive with static word embedding models such as fastText BIBREF31 or word2vec BIBREF32. The results also reveal a huge gap in performance between massively multilingual pretrained encoders and language-specific encoders in favor of the latter: our findings support other recent empirical evidence related to the “curse of multilinguality” BIBREF33, BIBREF34 in representation learning.",
"",
"6) We make all of these resources available on a website which facilitates easy creation, submission and sharing of Multi-Simlex-style datasets for a larger number of languages. We hope that this will yield an even larger repository of semantic resources that inspire future advances in NLP within and across languages.",
"In light of the success of Universal Dependencies BIBREF3, we hope that our initiative will instigate a collaborative public effort with established and clear-cut guidelines that will result in additional Multi-SimLex datasets in a large number of languages in the near future. Moreover, we hope that it will provide means to advance our understanding of distributional and lexical semantics across a large number of languages. All monolingual and cross-lingual Multi-SimLex datasets–along with detailed translation and annotation guidelines–are available online at: https://multisimlex.com/."
],
[
"The focus of the Multi-SimLex initiative is on the lexical relation of pure semantic similarity. For any pair of words, this relation measures whether their referents share the same features. For instance, graffiti and frescos are similar to the extent that they are both forms of painting and appear on walls. This relation can be contrasted with the cognitive association between two words, which often depends on how much their referents interact in the real world, or are found in the same situations. For instance, a painter is easily associated with frescos, although they lack any physical commonalities. Association is also known in the literature under other names: relatedness BIBREF13, topical similarity BIBREF35, and domain similarity BIBREF36.",
"Semantic similarity and association overlap to some degree, but do not coincide BIBREF37, BIBREF38. In fact, there exist plenty of pairs that are intuitively associated but not similar. Pairs where the converse is true can also be encountered, although more rarely. An example are synonyms where a word is common and the other infrequent, such as to seize and to commandeer. BIBREF14 revealed that while similarity measures based on the WordNet graph BIBREF39 and human judgments of association in the University of South Florida Free Association Database BIBREF40 do correlate, a number of pairs follow opposite trends. Several studies on human cognition also point in the same direction. For instance, semantic priming can be triggered by similar words without association BIBREF41. On the other hand, a connection with cue words is established more quickly for topically related words rather than for similar words in free association tasks BIBREF42.",
"A key property of semantic similarity is its gradience: pairs of words can be similar to a different degree. On the other hand, the relation of synonymy is binary: pairs of words are synonyms if they can be substituted in all contexts (or most contexts, in a looser sense), otherwise they are not. While synonyms can be conceived as lying on one extreme of the semantic similarity continuum, it is crucial to note that their definition is stated in purely relational terms, rather than invoking their referential properties BIBREF43, BIBREF44, BIBREF45. This makes behavioral studies on semantic similarity fundamentally different from lexical resources like WordNet BIBREF46, which include paradigmatic relations (such as synonymy)."
],
[
"The ramifications of the distinction between similarity and association are profound for distributional semantics. This paradigm of lexical semantics is grounded in the distributional hypothesis, formulated by BIBREF47 and BIBREF48. According to this hypothesis, the meaning of a word can be recovered empirically from the contexts in which it occurs within a collection of texts. Since both pairs of topically related words and pairs of purely similar words tend to appear in the same contexts, their associated meaning confounds the two distinct relations BIBREF14, BIBREF49, BIBREF50. As a result, distributional methods obscure a crucial facet of lexical meaning. This limitation also reflects onto word embeddings (WEs), representations of words as low-dimensional vectors that have become indispensable for a wide range of NLP applications BIBREF51, BIBREF52, BIBREF53. In particular, it involves both static WEs learned from co-occurrence patterns BIBREF32, BIBREF54, BIBREF31 and contextualized WEs learned from modeling word sequences BIBREF55, BIBREF29. As a result, in the induced representations, geometrical closeness (measured e.g. through cosine distance) conflates genuine similarity with broad relatedness. For instance, the vectors for antonyms such as sober and drunk, by definition dissimilar, might be neighbors in the semantic space under the distributional hypothesis. BIBREF36, BIBREF56, and BIBREF53 demonstrated that different choices of hyper-parameters in WE algorithms (such as context window) emphasize different relations in the resulting representations. Likewise, BIBREF57 and BIBREF54 discovered that WEs learned from texts annotated with syntactic information mirror similarity better than simple local bag-of-words neighborhoods.",
"The failure of WEs to capture semantic similarity, in turn, affects model performance in several NLP applications where such knowledge is crucial. In particular, Natural Language Understanding tasks such as statistical dialog modeling, text simplification, or semantic text similarity BIBREF58, BIBREF18, BIBREF59, among others, suffer the most. As a consequence, resources providing clean information on semantic similarity are key in mitigating the side effects of the distributional signal. In particular, such databases can be employed for the intrinsic evaluations of specific WE models as a proxy of their reliability for downstream applications BIBREF60, BIBREF61, BIBREF14; intuitively, the more WEs are misaligned with human judgments of similarity, the more their performance on actual tasks is expected to be degraded. Moreover, word representations can be specialized (a.k.a. retrofitted) by disentangling word relations of similarity and association. In particular, linguistic constraints sourced from external databases (such as synonyms from WordNet) can be injected into WEs BIBREF62, BIBREF63, BIBREF16, BIBREF22, BIBREF64 in order to enforce a particular relation in a distributional semantic space while preserving the original adjacency properties."
],
[
"In this work, we tackle the concept of (true) semantic similarity from a multilingual perspective. While the same meaning representations may be shared by all human speakers at a deep cognitive level, there is no one-to-one mapping between the words in the lexicons of different languages. This makes the comparison of similarity judgments across languages difficult, since the meaning overlap of translationally equivalent words is sometimes far less than exact. This results from the fact that the way languages `partition' semantic fields is partially arbitrary BIBREF65, although constrained cross-lingually by common cognitive biases BIBREF66. For instance, consider the field of colors: English distinguishes between green and blue, whereas Murle (South Sudan) has a single word for both BIBREF67.",
"In general, semantic typology studies the variation in lexical semantics across the world's languages. According to BIBREF68, the ways languages categorize concepts into the lexicon follow three main axes: 1) granularity: what is the number of categories in a specific domain?; 2) boundary location: where do the lines marking different categories lie?; 3) grouping and dissection: what are the membership criteria of a category; which instances are considered to be more prototypical? Different choices with respect to these axes lead to different lexicalization patterns. For instance, distinct senses in a polysemous word in English, such as skin (referring to both the body and fruit), may be assigned separate words in other languages such as Italian pelle and buccia, respectively BIBREF70. We later analyze whether similarity scores obtained from native speakers also loosely follow the patterns described by semantic typology."
],
[
"Word Pair Datasets. Rich expert-created resources such as WordNet BIBREF46, BIBREF71, VerbNet BIBREF72, BIBREF73, or FrameNet BIBREF74 encode a wealth of semantic and syntactic information, but are expensive and time-consuming to create. The scale of this problem gets multiplied by the number of languages in consideration. Therefore, crowd-sourcing with non-expert annotators has been adopted as a quicker alternative to produce smaller and more focused semantic resources and evaluation benchmarks. This alternative practice has had a profound impact on distributional semantics and representation learning BIBREF14. While some prominent English word pair datasets such as WordSim-353 BIBREF75, MEN BIBREF76, or Stanford Rare Words BIBREF77 did not discriminate between similarity and relatedness, the importance of this distinction was established by BIBREF14 through the creation of SimLex-999. This inspired other similar datasets which focused on different lexical properties. For instance, SimVerb-3500 BIBREF15 provided similarity ratings for 3,500 English verbs, whereas CARD-660 BIBREF78 aimed at measuring the semantic similarity of infrequent concepts.",
"",
"Semantic Similarity Datasets in Other Languages. Motivated by the impact of datasets such as SimLex-999 and SimVerb-3500 on representation learning in English, a line of related work focused on creating similar resources in other languages. The dominant approach is translating and reannotating the entire original English SimLex-999 dataset, as done previously for German, Italian, and Russian BIBREF25, Hebrew and Croatian BIBREF16, and Polish BIBREF79. Venekoski:2017nodalida apply this process only to a subset of 300 concept pairs from the English SimLex-999. On the other hand, BIBREF27 sampled a new set of 500 English concept pairs to ensure wider topical coverage and balance across similarity spectra, and then translated those pairs to German, Italian, Spanish, and Farsi (SEMEVAL-500). A similar approach was followed by BIBREF26 for Turkish, by BIBREF80 for Mandarin Chinese, and by BIBREF81 for Japanese. BIBREF82 translated the concatenation of SimLex-999, WordSim-353, and the English SEMEVAL-500 into Thai and then reannotated it. Finally, BIBREF83 translated English SimLex-999 and WordSim-353 to 11 resource-rich target languages (German, French, Russian, Italian, Dutch, Chinese, Portuguese, Swedish, Spanish, Arabic, Farsi), but they did not provide details concerning the translation process and the resolution of translation disagreements. More importantly, they also did not reannotate the translated pairs in the target languages. As we discussed in § SECREF6 and reiterate later in §SECREF5, semantic differences among languages can have a profound impact on the annotation scores; particulary, we show in §SECREF25 that these differences even roughly define language clusters based on language affinity.",
"A core issue with the current datasets concerns a lack of one unified procedure that ensures the comparability of resources in different languages. Further, concept pairs for different languages are sourced from different corpora (e.g., direct translation of the English data versus sampling from scratch in the target language). Moreover, the previous SimLex-based multilingual datasets inherit the main deficiencies of the English original version, such as the focus on nouns and highly frequent concepts. Finally, prior work mostly focused on languages that are widely spoken and do not account for the variety of the world's languages. Our long-term goal is devising a standardized methodology to extend the coverage also to languages that are resource-lean and/or typologically diverse (e.g., Welsh, Kiswahili as in this work).",
"",
"Multilingual Datasets for Natural Language Understanding. The Multi-SimLex initiative and corresponding datasets are also aligned with the recent efforts on procuring multilingual benchmarks that can help advance computational modeling of natural language understanding across different languages. For instance, pretrained multilingual language models such as multilingual bert BIBREF29 or xlm BIBREF30 are typically probed on XNLI test data BIBREF84 for cross-lingual natural language inference. XNLI was created by translating examples from the English MultiNLI dataset, and projecting its sentence labels BIBREF85. Other recent multilingual datasets target the task of question answering based on reading comprehension: i) MLQA BIBREF86 includes 7 languages ii) XQuAD BIBREF87 10 languages; iii) TyDiQA BIBREF88 9 widely spoken typologically diverse languages. While MLQA and XQuAD result from the translation from an English dataset, TyDiQA was built independently in each language. Another multilingual dataset, PAWS-X BIBREF89, focused on the paraphrase identification task and was created translating the original English PAWS BIBREF90 into 6 languages. We believe that Multi-SimLex can substantially contribute to this endeavor by offering a comprehensive multilingual benchmark for the fundamental lexical level relation of semantic similarity. In future work, Multi-SimLex also offers an opportunity to investigate the correlations between word-level semantic similarity and performance in downstream tasks such as QA and NLI across different languages."
],
[
"In this section, we discuss the design principles behind the English (eng) Multi-SimLex dataset, which is the basis for all the Multi-SimLex datasets in other languages, as detailed in §SECREF5. We first argue that a new, more balanced, and more comprehensive evaluation resource for lexical semantic similarity in English is necessary. We then describe how the 1,888 word pairs contained in the eng Multi-SimLex were selected in such a way as to represent various linguistic phenomena within a single integrated resource.",
"",
"Construction Criteria. The following criteria have to be satisfied by any high-quality semantic evaluation resource, as argued by previous studies focused on the creation of such resources BIBREF14, BIBREF15, BIBREF91, BIBREF27:",
"",
"(C1) Representative and diverse. The resource must cover the full range of diverse concepts occurring in natural language, including different word classes (e.g., nouns, verbs, adjectives, adverbs), concrete and abstract concepts, a variety of lexical fields, and different frequency ranges.",
"",
"(C2) Clearly defined. The resource must provide a clear understanding of which semantic relation exactly is annotated and measured, possibly contrasting it with other relations. For instance, the original SimLex-999 and SimVerb-3500 explicitly focus on true semantic similarity and distinguish it from broader relatedness captured by datasets such as MEN BIBREF76 or WordSim-353 BIBREF75.",
"",
"(C3) Consistent and reliable. The resource must ensure consistent annotations obtained from non-expert native speakers following simple and precise annotation guidelines.",
"In choosing the word pairs and constructing eng Multi-SimLex, we adhere to these requirements. Moreover, we follow good practices established by the research on related resources. In particular, since the introduction of the original SimLex-999 dataset BIBREF14, follow-up works have improved its construction protocol across several aspects, including: 1) coverage of more lexical fields, e.g., by relying on a diverse set of Wikipedia categories BIBREF27, 2) infrequent/rare words BIBREF78, 3) focus on particular word classes, e.g., verbs BIBREF15, 4) annotation quality control BIBREF78. Our goal is to make use of these improvements towards a larger, more representative, and more reliable lexical similarity dataset in English and, consequently, in all other languages.",
"",
"The Final Output: English Multi-SimLex. In order to ensure that the criterion C1 is satisfied, we consolidate and integrate the data already carefully sampled in prior work into a single, comprehensive, and representative dataset. This way, we can control for diversity, frequency, and other properties while avoiding to perform this time-consuming selection process from scratch. Note that, on the other hand, the word pairs chosen for English are scored from scratch as part of the entire Multi-SimLex annotation process, introduced later in §SECREF5. We now describe the external data sources for the final set of word pairs:",
"",
"1) Source: SimLex-999. BIBREF14. The English Multi-SimLex has been initially conceived as an extension of the original SimLex-999 dataset. Therefore, we include all 999 word pairs from SimLex, which span 666 noun pairs, 222 verb pairs, and 111 adjective pairs. While SimLex-999 already provides examples representing different POS classes, it does not have a sufficient coverage of different linguistic phenomena: for instance, it contains only very frequent concepts, and it does not provide a representative set of verbs BIBREF15.",
"",
"2) Source: SemEval-17: Task 2 BIBREF92. We start from the full dataset of 500 concept pairs to extract a total of 334 concept pairs for English Multi-SimLex a) which contain only single-word concepts, b) which are not named entities, c) where POS tags of the two concepts are the same, d) where both concepts occur in the top 250K most frequent word types in the English Wikipedia, and e) do not already occur in SimLex-999. The original concepts were sampled as to span all the 34 domains available as part of BabelDomains BIBREF93, which roughly correspond to the main high-level Wikipedia categories. This ensures topical diversity in our sub-sample.",
"",
"3) Source: CARD-660 BIBREF78. 67 word pairs are taken from this dataset focused on rare word similarity, applying the same selection criteria a) to e) employed for SEMEVAL-500. Words are controlled for frequency based on their occurrence counts from the Google News data and the ukWaC corpus BIBREF94. CARD-660 contains some words that are very rare (logboat), domain-specific (erythroleukemia) and slang (2mrw), which might be difficult to translate and annotate across a wide array of languages. Hence, we opt for retaining only the concept pairs above the threshold of top 250K most frequent Wikipedia concepts, as above.",
"",
"4) Source: SimVerb-3500 BIBREF15 Since both CARD-660 and SEMEVAL-500 are heavily skewed towards noun pairs, and nouns also dominate the original SimLex-999, we also extract additional verb pairs from the verb-specific similarity dataset SimVerb-3500. We randomly sample 244 verb pairs from SimVerb-3500 that represent all similarity spectra. In particular, we add 61 verb pairs for each of the similarity intervals: $[0,1.5), [1.5,3), [3,4.5), [4.5, 6]$. Since verbs in SimVerb-3500 were originally chosen from VerbNet BIBREF95, BIBREF73, they cover a wide range of verb classes and their related linguistic phenomena.",
"",
"5) Source: University of South Florida BIBREF96 norms, the largest database of free association for English. In order to improve the representation of different POS classes, we sample additional adjectives and adverbs from the USF norms following the procedure established by BIBREF14, BIBREF15. This yields additional 122 adjective pairs, but only a limited number of adverb pairs (e.g., later – never, now – here, once – twice). Therefore, we also create a set of adverb pairs semi-automatically by sampling adjectives that can be derivationally transformed into adverbs (e.g. adding the suffix -ly) from the USF, and assessing the correctness of such derivation in WordNet. The resulting pairs include, for instance, primarily – mainly, softly – firmly, roughly – reliably, etc. We include a total of 123 adverb pairs into the final English Multi-SimLex. Note that this is the first time adverbs are included into any semantic similarity dataset.",
"",
"Fulfillment of Construction Criteria. The final eng Multi-SimLex dataset spans 1,051 noun pairs, 469 verb pairs, 245 adjective pairs, and 123 adverb pairs. As mentioned above, the criterion C1 has been fulfilled by relying only on word pairs that already underwent meticulous sampling processes in prior work, integrating them into a single resource. As a consequence, Multi-SimLex allows for fine-grained analyses over different POS classes, concreteness levels, similarity spectra, frequency intervals, relation types, morphology, lexical fields, and it also includes some challenging orthographically similar examples (e.g., infection – inflection). We ensure that the criteria C2 and C3 are satisfied by using similar annotation guidelines as Simlex-999, SimVerb-3500, and SEMEVAL-500 that explicitly target semantic similarity. In what follows, we outline the carefully tailored process of translating and annotating Multi-SimLex datasets in all target languages."
],
[
"We now detail the development of the final Multi-SimLex resource, describing our language selection process, as well as translation and annotation of the resource, including the steps taken to ensure and measure the quality of this resource. We also provide key data statistics and preliminary cross-lingual comparative analyses.",
"",
"Language Selection. Multi-SimLex comprises eleven languages in addition to English. The main objective for our inclusion criteria has been to balance language prominence (by number of speakers of the language) for maximum impact of the resource, while simultaneously having a diverse suite of languages based on their typological features (such as morphological type and language family). Table TABREF10 summarizes key information about the languages currently included in Multi-SimLex. We have included a mixture of fusional, agglutinative, isolating, and introflexive languages that come from eight different language families. This includes languages that are very widely used such as Chinese Mandarin and Spanish, and low-resource languages such as Welsh and Kiswahili. We hope to further include additional languages and inspire other researchers to contribute to the effort over the lifetime of this project.",
"The work on data collection can be divided into two crucial phases: 1) a translation phase where the extended English language dataset with 1,888 pairs (described in §SECREF4) is translated into eleven target languages, and 2) an annotation phase where human raters scored each pair in the translated set as well as the English set. Detailed guidelines for both phases are available online at: https://multisimlex.com."
],
[
"Translators for each target language were instructed to find direct or approximate translations for the 1,888 word pairs that satisfy the following rules. (1) All pairs in the translated set must be unique (i.e., no duplicate pairs); (2) Translating two words from the same English pair into the same word in the target language is not allowed (e.g., it is not allowed to translate car and automobile to the same Spanish word coche). (3) The translated pairs must preserve the semantic relations between the two words when possible. This means that, when multiple translations are possible, the translation that best conveys the semantic relation between the two words found in the original English pair is selected. (4) If it is not possible to use a single-word translation in the target language, then a multi-word expression (MWE) can be used to convey the nearest possible semantics given the above points (e.g., the English word homework is translated into the Polish MWE praca domowa).",
"Satisfying the above rules when finding appropriate translations for each pair–while keeping to the spirit of the intended semantic relation in the English version–is not always straightforward. For instance, kinship terminology in Sinitic languages (Mandarin and Yue) uses different terms depending on whether the family member is older or younger, and whether the family member comes from the mother’s side or the father’s side. UTF8gbsn In Mandarin, brother has no direct translation and can be translated as either: 哥哥 (older brother) or 弟弟 (younger brother). Therefore, in such cases, the translators are asked to choose the best option given the semantic context (relation) expressed by the pair in English, otherwise select one of the translations arbitrarily. This is also used to remove duplicate pairs in the translated set, by differentiating the duplicates using a variant at each instance. Further, many translation instances were resolved using near-synonymous terms in the translation. For example, the words in the pair: wood – timber can only be directly translated in Estonian to puit, and are not distinguishable. Therefore, the translators approximated the translation for timber to the compound noun puitmaterjal (literally: wood material) in order to produce a valid pair in the target language. In some cases, a direct transliteration from English is used. For example, the pair: physician and doctor both translate to the same word in Estonian (arst); the less formal word doktor is used as a translation of doctor to generate a valid pair.",
"We measure the quality of the translated pairs by using a random sample set of 100 pairs (from the 1,888 pairs) to be translated by an independent translator for each target language. The sample is proportionally stratified according to the part-of-speech categories. The independent translator is given identical instructions to the main translator; we then measure the percentage of matched translated words between the two translations of the sample set. Table TABREF12 summarizes the inter-translator agreement results for all languages and by part-of-speech subsets. Overall across all languages, the agreement is 84.8%, which is similar to prior work BIBREF27, BIBREF97."
],
[
"Across all languages, 145 human annotators were asked to score all 1,888 pairs (in their given language). We finally collect at least ten valid annotations for each word pair in each language. All annotators were required to abide by the following instructions:",
"",
"1. Each annotator must assign an integer score between 0 and 6 (inclusive) indicating how semantically similar the two words in a given pair are. A score of 6 indicates very high similarity (i.e., perfect synonymy), while zero indicates no similarity.",
"",
"2. Each annotator must score the entire set of 1,888 pairs in the dataset. The pairs must not be shared between different annotators.",
"",
"3. Annotators are able to break the workload over a period of approximately 2-3 weeks, and are able to use external sources (e.g. dictionaries, thesauri, WordNet) if required.",
"",
"4. Annotators are kept anonymous, and are not able to communicate with each other during the annotation process.",
"The selection criteria for the annotators required that all annotators must be native speakers of the target language. Preference to annotators with university education was given, but not required. Annotators were asked to complete a spreadsheet containing the translated pairs of words, as well as the part-of-speech, and a column to enter the score. The annotators did not have access to the original pairs in English.",
"To ensure the quality of the collected ratings, we have employed an adjudication protocol similar to the one proposed and validated by Pilehvar:2018emnlp. It consists of the following three rounds:",
"",
"Round 1: All annotators are asked to follow the instructions outlined above, and to rate all 1,888 pairs with integer scores between 0 and 6.",
"",
"Round 2: We compare the scores of all annotators and identify the pairs for each annotator that have shown the most disagreement. We ask the annotators to reconsider the assigned scores for those pairs only. The annotators may chose to either change or keep the scores. As in the case with Round 1, the annotators have no access to the scores of the other annotators, and the process is anonymous. This process gives a chance for annotators to correct errors or reconsider their judgments, and has been shown to be very effective in reaching consensus, as reported by BIBREF78. We used a very similar procedure as BIBREF78 to identify the pairs with the most disagreement; for each annotator, we marked the $i$th pair if the rated score $s_i$ falls within: $s_i \\ge \\mu _i + 1.5$ or $s_i \\le \\mu _i - 1.5$, where $\\mu _i$ is the mean of the other annotators' scores.",
"",
"Round 3: We compute the average agreement for each annotator (with the other annotators), by measuring the average Spearman's correlation against all other annotators. We discard the scores of annotators that have shown the least average agreement with all other annotators, while we maintain at least ten annotators per language by the end of this round. The actual process is done in multiple iterations: (S1) we measure the average agreement for each annotator with every other annotator (this corresponds to the APIAA measure, see later); (S2) if we still have more than 10 valid annotators and the lowest average score is higher than in the previous iteration, we remove the lowest one, and rerun S1. Table TABREF14 shows the number of annotators at both the start (Round 1) and end (Round 3) of our process for each language.",
"We measure the agreement between annotators using two metrics, average pairwise inter-annotator agreement (APIAA), and average mean inter-annotator agreement (AMIAA). Both of these use Spearman's correlation ($\\rho $) between annotators scores, the only difference is how they are averaged. They are computed as follows:",
"where $\\rho (s_i,s_j)$ is the Spearman's correlation between annotators $i$ and $j$'s scores ($s_i$,$s_j$) for all pairs in the dataset, and $N$ is the number of annotators. APIAA has been used widely as the standard measure for inter-annotator agreement, including in the original SimLex paper BIBREF14. It simply averages the pairwise Spearman's correlation between all annotators. On the other hand, AMIAA compares the average Spearman's correlation of one held-out annotator with the average of all the other $N-1$ annotators, and then averages across all $N$ `held-out' annotators. It smooths individual annotator effects and arguably serves as a better upper bound than APIAA BIBREF15, BIBREF91, BIBREF78.",
"We present the respective APIAA and AMIAA scores in Table TABREF16 and Table TABREF17 for all part-of-speech subsets, as well as the agreement for the full datasets. As reported in prior work BIBREF15, BIBREF91, AMIAA scores are typically higher than APIAA scores. Crucially, the results indicate `strong agreement' (across all languages) using both measurements. The languages with the highest annotator agreement were French (fra) and Yue Chinese (yue), while Russian (rus) had the lowest overall IAA scores. These scores, however, are still considered to be `moderately strong agreement'."
],
[
"Similarity Score Distributions. Across all languages, the average score (mean $=1.61$, median$=1.1$) is on the lower side of the similarity scale. However, looking closer at the scores of each language in Table TABREF19, we indicate notable differences in both the averages and the spread of scores. Notably, French has the highest average of similarity scores (mean$=2.61$, median$=2.5$), while Kiswahili has the lowest average (mean$=1.28$, median$=0.5$). Russian has the lowest spread ($\\sigma =1.37$), while Polish has the largest ($\\sigma =1.62$). All of the languages are strongly correlated with each other, as shown in Figure FIGREF20, where all of the Spearman's correlation coefficients are greater than 0.6 for all language pairs. Languages that share the same language family are highly correlated (e.g, cmn-yue, rus-pol, est-fin). In addition, we observe high correlations between English and most other languages, as expected. This is due to the effect of using English as the base/anchor language to create the dataset. In simple words, if one translates to two languages $L_1$ and $L_2$ starting from the same set of pairs in English, it is higly likely that $L_1$ and $L_2$ will diverge from English in different ways. Therefore, the similarity between $L_1$-eng and $L_2$-eng is expected to be higher than between $L_1$-$L_2$, especially if $L_1$ and $L_2$ are typologically dissimilar languages (e.g., heb-cmn, see Figure FIGREF20). This phenomenon is well documented in related prior work BIBREF25, BIBREF27, BIBREF16, BIBREF97. While we acknowledge this as a slight artifact of the dataset design, it would otherwise be impossible to construct a semantically aligned and comprehensive dataset across a large number of languages.",
"We also report differences in the distribution of the frequency of words among the languages in Multi-SimLex. Figure FIGREF22 shows six example languages, where each bar segment shows the proportion of words in each language that occur in the given frequency range. For example, the 10K-20K segment of the bars represents the proportion of words in the dataset that occur in the list of most frequent words between the frequency rank of 10,000 and 20,000 in that language; likewise with other intervals. Frequency lists for the presented languages are derived from Wikipedia and Common Crawl corpora. While many concept pairs are direct or approximate translations of English pairs, we can see that the frequency distribution does vary across different languages, and is also related to inherent language properties. For instance, in Finnish and Russian, while we use infinitive forms of all verbs, conjugated verb inflections are often more frequent in raw corpora than the corresponding infinitive forms. The variance can also be partially explained by the difference in monolingual corpora size used to derive the frequency rankings in the first place: absolute vocabulary sizes are expected to fluctuate across different languages. However, it is also important to note that the datasets also contain subsets of lower-frequency and rare words, which can be used for rare word evaluations in multiple languages, in the spirit of Pilehvar:2018emnlp's English rare word dataset.",
"",
"Cross-Linguistic Differences. Table TABREF23 shows some examples of average similarity scores of English, Spanish, Kiswahili and Welsh concept pairs. Remember that the scores range from 0 to 6: the higher the score, the more similar the participants found the concepts in the pair. The examples from Table TABREF23 show evidence of both the stability of average similarity scores across languages (unlikely – friendly, book – literature, and vanish – disappear), as well as language-specific differences (care – caution). Some differences in similarity scores seem to group languages into clusters. For example, the word pair regular – average has an average similarity score of 4.0 and 4.1 in English and Spanish, respectively, whereas in Kiswahili and Welsh the average similarity score of this pair is 0.5 and 0.8. We analyze this phenomenon in more detail in §SECREF25.",
"There are also examples for each of the four languages having a notably higher or lower similarity score for the same concept pair than the three other languages. For example, large – big in English has an average similarity score of 5.9, whereas Spanish, Kiswahili and Welsh speakers rate the closest concept pair in their native language to have a similarity score between 2.7 and 3.8. What is more, woman – wife receives an average similarity of 0.9 in English, 2.9 in Spanish, and greater than 4.0 in Kiswahili and Welsh. The examples from Spanish include banco – asiento (bank – seat) which receives an average similarity score 5.1, while in the other three languages the similarity score for this word pair does not exceed 0.1. At the same time, the average similarity score of espantosamente – fantásticamente (amazingly – fantastically) is much lower in Spanish (0.4) than in other languages (4.1 – 5.1). In Kiswahili, an example of a word pair with a higher similarity score than the rest would be machweo – jioni (sunset – evening), having an average score of 5.5, while the other languages receive 2.8 or less, and a notably lower similarity score is given to wa ajabu - mkubwa sana (wonderful – terrific), getting 0.9, while the other languages receive 5.3 or more. Welsh examples include yn llwyr - yn gyfan gwbl (purely – completely), which scores 5.4 among Welsh speakers but 2.3 or less in other languages, while addo – tyngu (promise – swear) is rated as 0 by all Welsh annotators, but in the other three languages 4.3 or more on average.",
"There can be several explanations for the differences in similarity scores across languages, including but not limited to cultural context, polysemy, metonymy, translation, regional and generational differences, and most commonly, the fact that words and meanings do not exactly map onto each other across languages. For example, it is likely that the other three languages do not have two separate words for describing the concepts in the concept pair: big – large, and the translators had to opt for similar lexical items that were more distant in meaning, explaining why in English the concept pair received a much higher average similarity score than in other languages. A similar issue related to the mapping problem across languages arose in the Welsh concept pair yn llwye – yn gyfan gwbl, where Welsh speakers agreed that the two concepts are very similar. When asked, bilingual speakers considered the two Welsh concepts more similar than English equivalents purely – completely, potentially explaining why a higher average similarity score was reached in Welsh. The example of woman – wife can illustrate cultural differences or another translation-related issue where the word `wife' did not exist in some languages (for example, Estonian), and therefore had to be described using other words, affecting the comparability of the similarity scores. This was also the case with the football – soccer concept pair. The pair bank – seat demonstrates the effect of the polysemy mismatch across languages: while `bank' has two different meanings in English, neither of them is similar to the word `seat', but in Spanish, `banco' can mean `bank', but it can also mean `bench'. Quite naturally, Spanish speakers gave the pair banco – asiento a higher similarity score than the speakers of languages where this polysemy did not occur.",
"An example of metonymy affecting the average similarity score can be seen in the Kiswahili version of the word pair: sunset – evening (machweo – jioni). The average similarity score for this pair is much higher in Kiswahili, likely because the word `sunset' can act as a metonym of `evening'. The low similarity score of wonderful – terrific in Kiswahili (wa ajabu - mkubwa sana) can be explained by the fact that while `mkubwa sana' can be used as `terrific' in Kiswahili, it technically means `very big', adding to the examples of translation- and mapping-related effects. The word pair amazingly – fantastically (espantosamente – fantásticamente) brings out another translation-related problem: the accuracy of the translation. While `espantosamente' could arguably be translated to `amazingly', more common meanings include: `frightfully', `terrifyingly', and `shockingly', explaining why the average similarity score differs from the rest of the languages. Another problem was brought out by addo – tyngu (promise – swear) in Welsh, where the `tyngu' may not have been a commonly used or even a known word choice for annotators, pointing out potential regional or generational differences in language use.",
"Table TABREF24 presents examples of concept pairs from English, Spanish, Kiswahili, and Welsh on which the participants agreed the most. For example, in English all participants rated the similarity of trial – test to be 4 or 5. In Spanish and Welsh, all participants rated start – begin to correspond to a score of 5 or 6. In Kiswahili, money – cash received a similarity rating of 6 from every participant. While there are numerous examples of concept pairs in these languages where the participants agreed on a similarity score of 4 or higher, it is worth noting that none of these languages had a single pair where all participants agreed on either 1-2, 2-3, or 3-4 similarity rating. Interestingly, in English all pairs where all the participants agreed on a 5-6 similarity score were adjectives."
],
[
"Based on the analysis in Figure FIGREF20 and inspecting the anecdotal examples in the previous section, it is evident that the correlation between similarity scores across languages is not random. To corroborate this intuition, we visualize the vectors of similarity scores for each single language by reducing their dimensionality to 2 via Principal Component Analysis BIBREF98. The resulting scatter plot in Figure FIGREF26 reveals that languages from the same family or branch have similar patterns in the scores. In particular, Russian and Polish (both Slavic), Finnish and Estonian (both Uralic), Cantonese and Mandarin Chinese (both Sinitic), and Spanish and French (both Romance) are all neighbors.",
"In order to quantify exactly the effect of language affinity on the similarity scores, we run correlation analyses between these and language features. In particular, we extract feature vectors from URIEL BIBREF99, a massively multilingual typological database that collects and normalizes information compiled by grammarians and field linguists about the world's languages. In particular, we focus on information about geography (the areas where the language speakers are concentrated), family (the phylogenetic tree each language belongs to), and typology (including syntax, phonological inventory, and phonology). Moreover, we consider typological representations of languages that are not manually crafted by experts, but rather learned from texts. BIBREF100 proposed to construct such representations by training language-identifying vectors end-to-end as part of neural machine translation models.",
"The vector for similarity judgments and the vector of linguistic features for a given language have different dimensionality. Hence, we first construct a distance matrix for each vector space, such that both columns and rows are language indices, and each cell value is the cosine distance between the vectors of the corresponding language pair. Given a set of L languages, each resulting matrix $S$ has dimensionality of $\\mathbb {R}^{|L| \\times |L|}$ and is symmetrical. To estimate the correlation between the matrix for similarity judgments and each of the matrices for linguistic features, we run a Mantel test BIBREF101, a non-parametric statistical test based on matrix permutations that takes into account inter-dependencies among pairwise distances.",
"The results of the Mantel test reported in Table FIGREF26 show that there exist statistically significant correlations between similarity judgments and geography, family, and syntax, given that $p < 0.05$ and $z > 1.96$. The correlation coefficient is particularly strong for geography ($r = 0.647$) and syntax ($r = 0.649$). The former result is intuitive, because languages in contact easily borrow and loan lexical units, and cultural interactions may result in similar cognitive categorizations. The result for syntax, instead, cannot be explained so easily, as formal properties of language do not affect lexical semantics. Instead, we conjecture that, while no causal relation is present, both syntactic features and similarity judgments might be linked to a common explanatory variable (such as geography). In fact, several syntactic properties are not uniformly spread across the globe. For instance, verbs with Verb–Object–Subject word order are mostly concentrated in Oceania BIBREF102. In turn, geographical proximity leads to similar judgment patterns, as mentioned above. On the other hand, we find no correlation with phonology and inventory, as expected, nor with the bottom-up typological features from BIBREF100."
],
[
"A crucial advantage of having semantically aligned monolingual datasets across different languages is the potential to create cross-lingual semantic similarity datasets. Such datasets allow for probing the quality of cross-lingual representation learning algorithms BIBREF27, BIBREF103, BIBREF104, BIBREF105, BIBREF106, BIBREF30, BIBREF107 as an intrinsic evaluation task. However, the cross-lingual datasets previous work relied upon BIBREF27 were limited to a homogeneous set of high-resource languages (e.g., English, German, Italian, Spanish) and a small number of concept pairs (all less than 1K pairs). We address both problems by 1) using a typologically more diverse language sample, and 2) relying on a substantially larger English dataset as a source for the cross-lingual datasets: 1,888 pairs in this work versus 500 pairs in the work of Camacho:2017semeval. As a result, each of our cross-lingual datasets contains a substantially larger number of concept pairs, as shown in Table TABREF30. The cross-lingual Multi-Simlex datasets are constructed automatically, leveraging word pair translations and annotations collected in all 12 languages. This yields a total of 66 cross-lingual datasets, one for each possible combination of languages. Table TABREF30 provides the final number of concept pairs, which lie between 2,031 and 3,480 pairs for each cross-lingual dataset, whereas Table TABREF29 shows some sample pairs with their corresponding similarity scores.",
"The automatic creation and verification of cross-lingual datasets closely follows the procedure first outlined by Camacho:2015acl and later adopted by Camacho:2017semeval (for semantic similarity) and Vulic:2019acl (for graded lexical entailment). First, given two languages, we intersect their aligned concept pairs obtained through translation. For instance, starting from the aligned pairs attroupement – foule in French and rahvasumm – rahvahulk in Estonian, we construct two cross-lingual pairs attroupement – rahvaluk and rahvasumm – foule. The scores of cross-lingual pairs are then computed as averages of the two corresponding monolingual scores. Finally, in order to filter out concept pairs whose semantic meaning was not preserved during this operation, we retain only cross-lingual pairs for which the corresponding monolingual scores $(s_s, s_t)$ differ at most by one fifth of the full scale (i.e., $\\mid s_s - s_t \\mid \\le 1.2$). This heuristic mitigates the noise due to cross-lingual semantic shifts BIBREF27, BIBREF97. We refer the reader to the work of Camacho:2015acl for a detailed technical description of the procedure. UTF8gbsn",
"To assess the quality of the resulting cross-lingual datasets, we have conducted a verification experiment similar to Vulic:2019acl. We randomly sampled 300 concept pairs in the English-Spanish, English-French, and English-Mandarin cross-lingual datasets. Subsequently, we asked bilingual native speakers to provide similarity judgments of each pair. The Spearman's correlation score $\\rho $ between automatically induced and manually collected ratings achieves $\\rho \\ge 0.90$ on all samples, which confirms the viability of the automatic construction procedure.",
"",
"Score and Class Distributions. The summary of score and class distributions across all 66 cross-lingual datasets are provided in Figure FIGREF31 and Figure FIGREF31, respectively. First, it is obvious that the distribution over the four POS classes largely adheres to that of the original monolingual Multi-SimLex datasets, and that the variance is quite low: e.g., the eng-fra dataset contains the lowest proportion of nouns (49.21%) and the highest proportion of verbs (27.1%), adjectives (15.28%), and adverbs (8.41%). On the other hand, the distribution over similarity intervals in Figure FIGREF31 shows a much greater variance. This is again expected as this pattern resurfaces in monolingual datasets (see Table TABREF19). It is also evident that the data are skewed towards lower-similarity concept pairs. However, due to the joint size of all cross-lingual datasets (see Table TABREF30), even the least represented intervals contain a substantial number of concept pairs. For instance, the rus-yue dataset contains the least highly similar concept pairs (in the interval $[4,6]$) of all 66 cross-lingual datasets. Nonetheless, the absolute number of pairs (138) in that interval for rus-yue is still substantial. If needed, this makes it possible to create smaller datasets which are balanced across the similarity spectra through sub-sampling."
],
[
"After the numerical and qualitative analyses of the Multi-SimLex datasets provided in §§ SECREF18–SECREF25, we now benchmark a series of representation learning models on the new evaluation data. We evaluate standard static word embedding algorithms such as fastText BIBREF31, as well as a range of more recent text encoders pretrained on language modeling such as multilingual BERT BIBREF29. These experiments provide strong baseline scores on the new Multi-SimLex datasets and offer a first large-scale analysis of pretrained encoders on word-level semantic similarity across diverse languages. In addition, the experiments now enabled by Multi-SimLex aim to answer several important questions. (Q1) Is it viable to extract high-quality word-level representations from pretrained encoders receiving subword-level tokens as input? Are such representations competitive with standard static word-level embeddings? (Q2) What are the implications of monolingual pretraining versus (massively) multilingual pretraining for performance? (Q3) Do lightweight unsupervised post-processing techniques improve word representations consistently across different languages? (Q4) Can we effectively transfer available external lexical knowledge from resource-rich languages to resource-lean languages in order to learn word representations that distinguish between true similarity and conceptual relatedness (see the discussion in §SECREF6)?"
],
[
"Static Word Embeddings in Different Languages. First, we evaluate a standard method for inducing non-contextualized (i.e., static) word embeddings across a plethora of different languages: fastText (ft) vectors BIBREF31 are currently the most popular and robust choice given 1) the availability of pretrained vectors in a large number of languages BIBREF108 trained on large Common Crawl (CC) plus Wikipedia (Wiki) data, and 2) their superior performance across a range of NLP tasks BIBREF109. In fact, fastText is an extension of the standard word-level CBOW and skip-gram word2vec models BIBREF32 that takes into account subword-level information, i.e. the contituent character n-grams of each word BIBREF110. For this reason, fastText is also more suited for modeling rare words and morphologically rich languages.",
"We rely on 300-dimensional ft word vectors trained on CC+Wiki and available online for 157 languages. The word vectors for all languages are obtained by CBOW with position-weights, with character n-grams of length 5, a window of size 5, 10 negative examples, and 10 training epochs. We also probe another (older) collection of ft vectors, pretrained on full Wikipedia dumps of each language.. The vectors are 300-dimensional, trained with the skip-gram objective for 5 epochs, with 5 negative examples, a window size set to 5, and relying on all character n-grams from length 3 to 6. Following prior work, we trim the vocabularies for all languages to the 200K most frequent words and compute representations for multi-word expressions by averaging the vectors of their constituent words.",
"",
"Unsupervised Post-Processing. Further, we consider a variety of unsupervised post-processing steps that can be applied post-training on top of any pretrained input word embedding space without any external lexical semantic resource. So far, the usefulness of such methods has been verified only on the English language through benchmarks for lexical semantics and sentence-level tasks BIBREF113. In this paper, we assess if unsupervised post-processing is beneficial also in other languages. To this end, we apply the following post-hoc transformations on the initial word embeddings:",
"",
"1) Mean centering (mc) is applied after unit length normalization to ensure that all vectors have a zero mean, and is commonly applied in data mining and analysis BIBREF114, BIBREF115.",
"",
"2) All-but-the-top (abtt) BIBREF113, BIBREF116 eliminates the common mean vector and a few top dominating directions (according to PCA) from the input distributional word vectors, since they do not contribute towards distinguishing the actual semantic meaning of different words. The method contains a single (tunable) hyper-parameter $dd_{A}$ which denotes the number of the dominating directions to remove from the initial representations. Previous work has verified the usefulness of abtt in several English lexical semantic tasks such as semantic similarity, word analogies, and concept categorization, as well as in sentence-level text classification tasks BIBREF113.",
"",
"3) uncovec BIBREF117 adjusts the similarity order of an arbitrary input word embedding space, and can emphasize either syntactic or semantic information in the transformed vectors. In short, it transforms the input space $\\mathbf {X}$ into an adjusted space $\\mathbf {X}\\mathbf {W}_{\\alpha }$ through a linear map $\\mathbf {W}_{\\alpha }$ controlled by a single hyper-parameter $\\alpha $. The $n^{\\text{th}}$-order similarity transformation of the input word vector space $\\mathbf {X}$ (for which $n=1$) can be obtained as $\\mathbf {M}_{n}(\\mathbf {X}) = \\mathbf {M}_1(\\mathbf {X}\\mathbf {W}_{(n-1)/2})$, with $\\mathbf {W}_{\\alpha }=\\mathbf {Q}\\mathbf {\\Gamma }^{\\alpha }$, where $\\mathbf {Q}$ and $\\mathbf {\\Gamma }$ are the matrices obtained via eigendecomposition of $\\mathbf {X}^T\\mathbf {X}=\\mathbf {Q}\\mathbf {\\Gamma }\\mathbf {Q}^T$. $\\mathbf {\\Gamma }$ is a diagonal matrix containing eigenvalues of $\\mathbf {X}^T\\mathbf {X}$; $\\mathbf {Q}$ is an orthogonal matrix with eigenvectors of $\\mathbf {X}^T\\mathbf {X}$ as columns. While the motivation for the uncovec methods does originate from adjusting discrete similarity orders, note that $\\alpha $ is in fact a continuous real-valued hyper-parameter which can be carefully tuned. For more technical details we refer the reader to the original work of BIBREF117.",
"",
"As mentioned, all post-processing methods can be seen as unsupervised retrofitting methods that, given an arbitrary input vector space $\\mathbf {X}$, produce a perturbed/transformed output vector space $\\mathbf {X}^{\\prime }$, but unlike common retrofitting methods BIBREF62, BIBREF16, the perturbation is completely unsupervised (i.e., self-contained) and does not inject any external (semantic similarity oriented) knowledge into the vector space. Note that different perturbations can also be stacked: e.g., we can apply uncovec and then use abtt on top the output uncovec vectors. When using uncovec and abtt we always length-normalize and mean-center the data first (i.e., we apply the simple mc normalization). Finally, we tune the two hyper-parameters $d_A$ (for abtt) and $\\alpha $ (uncovec) on the English Multi-SimLex and use the same values on the datasets of all other languages; we report results with $dd_A = 3$ or $dd_A = 10$, and $\\alpha =-0.3$.",
"",
"Contextualized Word Embeddings. We also evaluate the capacity of unsupervised pretraining architectures based on language modeling objectives to reason over lexical semantic similarity. To the best of our knowledge, our article is the first study performing such analyses. State-of-the-art models such as bert BIBREF29, xlm BIBREF30, or roberta BIBREF118 are typically very deep neural networks based on the Transformer architecture BIBREF119. They receive subword-level tokens as inputs (such as WordPieces BIBREF120) to tackle data sparsity. In output, they return contextualized embeddings, dynamic representations for words in context.",
"To represent words or multi-word expressions through a pretrained model, we follow prior work BIBREF121 and compute an input item's representation by 1) feeding it to a pretrained model in isolation; then 2) averaging the $H$ last hidden representations for each of the item’s constituent subwords; and then finally 3) averaging the resulting subword representations to produce the final $d$-dimensional representation, where $d$ is the embedding and hidden-layer dimensionality (e.g., $d=768$ with bert). We opt for this approach due to its proven viability and simplicity BIBREF121, as it does not require any additional corpora to condition the induction of contextualized embeddings. Other ways to extract the representations from pretrained models BIBREF122, BIBREF123, BIBREF124 are beyond the scope of this work, and we will experiment with them in the future.",
"In other words, we treat each pretrained encoder enc as a black-box function to encode a single word or a multi-word expression $x$ in each language into a $d$-dimensional contextualized representation $\\mathbf {x}_{\\textsc {enc}} \\in \\mathbb {R}^d = \\textsc {enc}(x)$ (e.g., $d=768$ with bert). As multilingual pretrained encoders, we experiment with the multilingual bert model (m-bert) BIBREF29 and xlm BIBREF30. m-bert is pretrained on monolingual Wikipedia corpora of 102 languages (comprising all Multi-SimLex languages) with a 12-layer Transformer network, and yields 768-dimensional representations. Since the concept pairs in Multi-SimLex are lowercased, we use the uncased version of m-bert. m-bert comprises all Multi-SimLex languages, and its evident ability to perform cross-lingual transfer BIBREF12, BIBREF125, BIBREF126 also makes it a convenient baseline model for cross-lingual experiments later in §SECREF8. The second multilingual model we consider, xlm-100, is pretrained on Wikipedia dumps of 100 languages, and encodes each concept into a $1,280$-dimensional representation. In contrast to m-bert, xlm-100 drops the next-sentence prediction objective and adds a cross-lingual masked language modeling objective. For both encoders, the representations of each concept are computed as averages over the last $H=4$ hidden layers in all experiments, as suggested by Wu:2019arxiv.",
"Besides m-bert and xlm, covering multiple languages, we also analyze the performance of “language-specific” bert and xlm models for the languages where they are available: Finnish, Spanish, English, Mandarin Chinese, and French. The main goal of this comparison is to study the differences in performance between multilingual “one-size-fits-all” encoders and language-specific encoders. For all experiments, we rely on the pretrained models released in the Transformers repository BIBREF127.",
"Unsupervised post-processing steps devised for static word embeddings (i.e., mean-centering, abtt, uncovec) can also be applied on top of contextualized embeddings if we predefine a vocabulary of word types $V$ that will be represented in a word vector space $\\mathbf {X}$. We construct such $V$ for each language as the intersection of word types covered by the corresponding CC+Wiki fastText vectors and the (single-word or multi-word) expressions appearing in the corresponding Multi-SimLex dataset.",
"Finally, note that it is not feasible to evaluate a full range of available pretrained encoders within the scope of this work. Our main intention is to provide the first set of baseline results on Multi-SimLex by benchmarking a sample of most popular encoders, at the same time also investigating other important questions such as performance of static versus contextualized word embeddings, or multilingual versus language-specific pretraining. Another purpose of the experiments is to outline the wide potential and applicability of the Multi-SimLex datasets for multilingual and cross-lingual representation learning evaluation."
],
[
"The results we report are Spearman's $\\rho $ coefficients of the correlation between the ranks derived from the scores of the evaluated models and the human scores provided in each Multi-SimLex dataset. The main results with static and contextualized word vectors for all test languages are summarized in Table TABREF43. The scores reveal several interesting patterns, and also pinpoint the main challenges for future work.",
"",
"State-of-the-Art Representation Models. The absolute scores of CC+Wiki ft, Wiki ft, and m-bert are not directly comparable, because these models have different coverage. In particular, Multi-SimLex contains some out-of-vocabulary (OOV) words whose static ft embeddings are not available. On the other hand, m-bert has perfect coverage. A general comparison between CC+Wiki and Wiki ft vectors, however, supports the intuition that larger corpora (such as CC+Wiki) yield higher correlations. Another finding is that a single massively multilingual model such as m-bert cannot produce semantically rich word-level representations. Whether this actually happens because the training objective is different—or because the need to represent 100+ languages reduces its language-specific capacity—is investigated further below.",
"The overall results also clearly indicate that (i) there are differences in performance across different monolingual Multi-SimLex datasets, and (ii) unsupervised post-processing is universally useful, and can lead to huge improvements in correlation scores for many languages. In what follows, we also delve deeper into these analyses.",
"",
"Impact of Unsupervised Post-Processing. First, the results in Table TABREF43 suggest that applying dimension-wise mean centering to the initial vector spaces has positive impact on word similarity scores in all test languages and for all models, both static and contextualized (see the +mc rows in Table TABREF43). BIBREF128 show that distributional word vectors have a tendency towards narrow clusters in the vector space (i.e., they occupy a narrow cone in the vector space and are therefore anisotropic BIBREF113, BIBREF129), and are prone to the undesired effect of hubness BIBREF130, BIBREF131. Applying dimension-wise mean centering has the effect of spreading the vectors across the hyper-plane and mitigating the hubness issue, which consequently improves word-level similarity, as it emerges from the reported results. Previous work has already validated the importance of mean centering for clustering-based tasks BIBREF132, bilingual lexicon induction with cross-lingual word embeddings BIBREF133, BIBREF134, BIBREF135, and for modeling lexical semantic change BIBREF136. However, to the best of our knowledge, the results summarized in Table TABREF43 are the first evidence that also confirms its importance for semantic similarity in a wide array of languages. In sum, as a general rule of thumb, we suggest to always mean-center representations for semantic tasks.",
"The results further indicate that additional post-processing methods such as abtt and uncovec on top of mean-centered vector spaces can lead to further gains in most languages. The gains are even visible for languages which start from high correlation scores: for instance., cmn with CC+Wiki ft increases from 0.534 to 0.583, from 0.315 to 0.526 with Wiki ft, and from 0.408 to 0.487 with m-bert. Similarly, for rus with CC+Wiki ft we can improve from 0.422 to 0.500, and for fra the scores improve from 0.578 to 0.613. There are additional similar cases reported in Table TABREF43.",
"Overall, the unsupervised post-processing techniques seem universally useful across languages, but their efficacy and relative performance does vary across different languages. Note that we have not carefully fine-tuned the hyper-parameters of the evaluated post-processing methods, so additional small improvements can be expected for some languages. The main finding, however, is that these post-processing techniques are robust to semantic similarity computations beyond English, and are truly language independent. For instance, removing dominant latent (PCA-based) components from word vectors emphasizes semantic differences between different concepts, as only shared non-informative latent semantic knowledge is removed from the representations.",
"In summary, pretrained word embeddings do contain more information pertaining to semantic similarity than revealed in the initial vectors. This way, we have corroborated the hypotheses from prior work BIBREF113, BIBREF117 which were not previously empirically verified on other languages due to a shortage of evaluation data; this gap has now been filled with the introduction of the Multi-SimLex datasets. In all follow-up experiments, we always explicitly denote which post-processing configuration is used in evaluation.",
"",
"POS-Specific Subsets. We present the results for subsets of word pairs grouped by POS class in Table TABREF46. Prior work based on English data showed that representations for nouns are typically of higher quality than those for the other POS classes BIBREF49, BIBREF137, BIBREF50. We observe a similar trend in other languages as well. This pattern is consistent across different representation models and can be attributed to several reasons. First, verb representations need to express a rich range of syntactic and semantic behaviors rather than purely referential features BIBREF138, BIBREF139, BIBREF73. Second, low correlation scores on the adjective and adverb subsets in some languages (e.g., pol, cym, swa) might be due to their low frequency in monolingual texts, which yields unreliable representations. In general, the variance in performance across different word classes warrants further research in class-specific representation learning BIBREF140, BIBREF50. The scores further attest the usefulness of unsupervised post-processing as almost all class-specific correlation scores are improved by applying mean-centering and abtt. Finally, the results for m-bert and xlm-100 in Table TABREF46 further confirm that massively multilingual pretraining cannot yield reasonable semantic representations for many languages: in fact, for some classes they display no correlation with human ratings at all.",
"",
"Differences across Languages. Naturally, the results from Tables TABREF43 and TABREF46 also reveal that there is variation in performance of both static word embeddings and pretrained encoders across different languages. Among other causes, the lowest absolute scores with ft are reported for languages with least resources available to train monolingual word embeddings, such as Kiswahili, Welsh, and Estonian. The low performance on Welsh is especially indicative: Figure FIGREF20 shows that the ratings in the Welsh dataset match up very well with the English ratings, but we cannot achieve the same level of correlation in Welsh with Welsh ft word embeddings. Difference in performance between two closely related languages, est (low-resource) and fin (high-resource), provides additional evidence in this respect.",
"The highest reported scores with m-bert and xlm-100 are obtained for Mandarin Chinese and Yue Chinese: this effectively points to the weaknesses of massively multilingual training with a joint subword vocabulary spanning 102 and 100 languages. Due to the difference in scripts, “language-specific” subwords for yue and cmn do not need to be shared across a vast amount of languages and the quality of their representation remains unscathed. This effectively means that m-bert's subword vocabulary contains plenty of cmn-specific and yue-specific subwords which are exploited by the encoder when producing m-bert-based representations. Simultaneously, higher scores with m-bert (and xlm in Table TABREF46) are reported for resource-rich languages such as French, Spanish, and English, which are better represented in m-bert's training data. We also observe lower absolute scores (and a larger number of OOVs) for languages with very rich and productive morphological systems such as the two Slavic languages (Polish and Russian) and Finnish. Since Polish and Russian are known to have large Wikipedias and Common Crawl data BIBREF33 (e.g., their Wikipedias are in the top 10 largest Wikipedias worldwide), the problem with coverage can be attributed exactly to the proliferation of morphological forms in those languages.",
"Finally, while Table TABREF43 does reveal that unsupervised post-processing is useful for all languages, it also demonstrates that peak scores are achieved with different post-processing configurations. This finding suggests that a more careful language-specific fine-tuning is indeed needed to refine word embeddings towards semantic similarity. We plan to inspect the relationship between post-processing techniques and linguistic properties in more depth in future work.",
"",
"Multilingual vs. Language-Specific Contextualized Embeddings. Recent work has shown that—despite the usefulness of massively multilingual models such as m-bert and xlm-100 for zero-shot cross-lingual transfer BIBREF12, BIBREF125—stronger results in downstream tasks for a particular language can be achieved by pretraining language-specific models on language-specific data. In this experiment, motivated by the low results of m-bert and xlm-100 (see again Table TABREF46), we assess if monolingual pretrained encoders can produce higher-quality word-level representations than multilingual models. Therefore, we evaluate language-specific bert and xlm models for a subset of the Multi-SimLex languages for which such models are currently available: Finnish BIBREF141 (bert-base architecture, uncased), French BIBREF142 (the FlauBERT model based on xlm), English (bert-base, uncased), Mandarin Chinese (bert-base) BIBREF29 and Spanish (bert-base, uncased). In addition, we also evaluate a series of pretrained encoders available for English: (i) bert-base, bert-large, and bert-large with whole word masking (wwm) from the original work on BERT BIBREF29, (ii) monolingual “English-specific” xlm BIBREF30, and (iii) two models which employ parameter reduction techniques to build more compact encoders: albert-b uses a configuration similar to bert-base, while albert-l is similar to bert-large, but with an $18\\times $ reduction in the number of parameters BIBREF143.",
"From the results in Table FIGREF49, it is clear that monolingual pretrained encoders yield much more reliable word-level representations. The gains are visible even for languages such as cmn which showed reasonable performance with m-bert and are substantial on all test languages. This further confirms the validity of language-specific pretraining in lieu of multilingual training, if sufficient monolingual data are available. Moreover, a comparison of pretrained English encoders in Figure FIGREF49 largely follows the intuition: the larger bert-large model yields slight improvements over bert-base, and we can improve a bit more by relying on word-level (i.e., lexical-level) masking.Finally, light-weight albert model variants are quite competitive with the original bert models, with only modest drops reported, and albert-l again outperforms albert-b. Overall, it is interesting to note that the scores obtained with monolingual pretrained encoders are on a par with or even outperform static ft word embeddings: this is a very intriguing finding per se as it shows that such subword-level models trained on large corpora can implicitly capture rich lexical semantic knowledge.",
"",
"Similarity-Specialized Word Embeddings. Conflating distinct lexico-semantic relations is a well-known property of distributional representations BIBREF144, BIBREF53. Semantic specialization fine-tunes distributional spaces to emphasize a particular lexico-semantic relation in the transformed space by injecting external lexical knowledge BIBREF145. Explicitly discerning between true semantic similarity (as captured in Multi-SimLex) and broad conceptual relatedness benefits a number of tasks, as discussed in §SECREF4. Since most languages lack dedicated lexical resources, however, one viable strategy to steer monolingual word vector spaces to emphasize semantic similarity is through cross-lingual transfer of lexical knowledge, usually through a shared cross-lingual word vector space BIBREF106. Therefore, we evaluate the effectiveness of specialization transfer methods using Multi-SimLex as our multilingual test bed.",
"We evaluate a current state-of-the-art cross-lingual specialization transfer method with minimal requirements, put forth recently by Ponti:2019emnlp. In a nutshell, their li-postspec method is a multi-step procedure that operates as follows. First, the knowledge about semantic similarity is extracted from WordNet in the form of triplets, that is, linguistic constraints $(w_1, w_2, r)$, where $w_1$ and $w_2$ are two concepts, and $r$ is a relation between them obtained from WordNet (e.g., synonymy or antonymy). The goal is to “attract” synonyms closer to each other in the transformed vector space as they reflect true semantic similarity, and “repel” antonyms further apart. In the second step, the linguistic constraints are translated from English to the target language via a shared cross-lingual word vector space. To this end, following Ponti:2019emnlp we rely on cross-lingual word embeddings (CLWEs) BIBREF146 available online, which are based on Wiki ft vectors. Following that, a constraint refinement step is applied in the target language which aims to eliminate the noise inserted during the translation process. This is done by training a relation classification tool: it is trained again on the English linguistic constraints and then used on the translated target language constraints, where the transfer is again enabled via a shared cross-lingual word vector space. Finally, a state-of-the-art monolingual specialization procedure from Ponti:2018emnlp injects the (now target language) linguistic constraints into the target language distributional space.",
"The scores are summarized in Table TABREF56. Semantic specialization with li-postspec leads to substantial improvements in correlation scores for the majority of the target languages, demonstrating the importance of external semantic similarity knowledge for semantic similarity reasoning. However, we also observe deteriorated performance for the three target languages which can be considered the lowest-resource ones in our set: cym, swa, yue. We hypothesize that this occurs due to the inferior quality of the underlying monolingual Wikipedia word embeddings, which generates a chain of error accumulations. In particular, poor distributional word estimates compromise the alignment of the embedding spaces, which in turn results in increased translation noise, and reduced refinement ability of the relation classifier. On a high level, this “poor get poorer” observation again points to the fact that one of the primary causes of low performance of resource-low languages in semantic tasks is the sheer lack of even unlabeled data for distributional training. On the other hand, as we see from Table TABREF46, typological dissimilarity between the source and the target does not deteriorate the effectiveness of semantic specialization. In fact, li-postspec does yield substantial gains also for the typologically distant targets such as heb, cmn, and est. The critical problem indeed seems to be insufficient raw data for monolingual distributional training."
],
[
"Similar to monolingual evaluation in §SECREF7, we now evaluate several state-of-the-art cross-lingual representation models on the suite of 66 automatically constructed cross-lingual Multi-SimLex datasets. Again, note that evaluating a full range of cross-lingual models available in the rich prior work on cross-lingual representation learning is well beyond the scope of this article. We therefore focus our cross-lingual analyses on several well-established and indicative state-of-the-art cross-lingual models, again spanning both static and contextualized cross-lingual word embeddings."
],
[
"Static Word Embeddings. We rely on a state-of-the-art mapping-based method for the induction of cross-lingual word embeddings (CLWEs): vecmap BIBREF148. The core idea behind such mapping-based or projection-based approaches is to learn a post-hoc alignment of independently trained monolingual word embeddings BIBREF106. Such methods have gained popularity due to their conceptual simplicity and competitive performance coupled with reduced bilingual supervision requirements: they support CLWE induction with only as much as a few thousand word translation pairs as the bilingual supervision BIBREF149, BIBREF150, BIBREF151, BIBREF107. More recent work has shown that CLWEs can be induced with even weaker supervision from small dictionaries spanning several hundred pairs BIBREF152, BIBREF135, identical strings BIBREF153, or even only shared numerals BIBREF154. In the extreme, fully unsupervised projection-based CLWEs extract such seed bilingual lexicons from scratch on the basis of monolingual data only BIBREF103, BIBREF148, BIBREF155, BIBREF156, BIBREF104, BIBREF157.",
"Recent empirical studies BIBREF158, BIBREF135, BIBREF159 have compared a variety of unsupervised and weakly supervised mapping-based CLWE methods, and vecmap emerged as the most robust and very competitive choice. Therefore, we focus on 1) its fully unsupervised variant (unsuper) in our comparisons. For several language pairs, we also report scores with two other vecmap model variants: 2) a supervised variant which learns a mapping based on an available seed lexicon (super), and 3) a supervised variant with self-learning (super+sl) which iteratively increases the seed lexicon and improves the mapping gradually. For a detailed description of these variants, we refer the reader to recent work BIBREF148, BIBREF135. We again use CC+Wiki ft vectors as initial monolingual word vectors, except for yue where Wiki ft is used. The seed dictionaries of two different sizes (1k and 5k translation pairs) are based on PanLex BIBREF160, and are taken directly from prior work BIBREF135, or extracted from PanLex following the same procedure as in the prior work.",
"",
"Contextualized Cross-Lingual Word Embeddings. We again evaluate the capacity of (massively) multilingual pretrained language models, m-bert and xlm-100, to reason over cross-lingual lexical similarity. Implicitly, such an evaluation also evaluates “the intrinsic quality” of shared cross-lingual word-level vector spaces induced by these methods, and their ability to boost cross-lingual transfer between different language pairs. We rely on the same procedure of aggregating the models' subword-level parameters into word-level representations, already described in §SECREF34.",
"As in monolingual settings, we can apply unsupervised post-processing steps such as abtt to both static and contextualized cross-lingual word embeddings."
],
[
"Main Results and Differences across Language Pairs. A summary of the results on the 66 cross-lingual Multi-SimLex datasets are provided in Table TABREF59 and Figure FIGREF60. The findings confirm several interesting findings from our previous monolingual experiments (§SECREF44), and also corroborate several hypotheses and findings from prior work, now on a large sample of language pairs and for the task of cross-lingual semantic similarity.",
"First, we observe that the fully unsupervised vecmap model, despite being the most robust fully unsupervised method at present, fails to produce a meaningful cross-lingual word vector space for a large number of language pairs (see the bottom triangle of Table TABREF59): many correlation scores are in fact no-correlation results, accentuating the problem of fully unsupervised cross-lingual learning for typologically diverse languages and with fewer amounts of monolingual data BIBREF135. The scores are particularly low across the board for lower-resource languages such as Welsh and Kiswahili. It also seems that the lack of monolingual data is a larger problem than typological dissimilarity between language pairs, as we do observe reasonably high correlation scores with vecmap for language pairs such as cmn-spa, heb-est, and rus-fin. However, typological differences (e.g., morphological richness) still play an important role as we observe very low scores when pairing cmn with morphologically rich languages such fin, est, pol, and rus. Similar to prior work of Vulic:2019we and doval2019onthe, given the fact that unsupervised vecmap is the most robust unsupervised CLWE method at present BIBREF158, our results again question the usefulness of fully unsupervised approaches for a large number of languages, and call for further developments in the area of unsupervised and weakly supervised cross-lingual representation learning.",
"The scores of m-bert and xlm-100 lead to similar conclusions as in the monolingual settings. Reasonable correlation scores are achieved only for a small subset of resource-rich language pairs (e.g., eng, fra, spa, cmn) which dominate the multilingual m-bert training. Interestingly, the scores indicate a much higher performance of language pairs where yue is one of the languages when we use m-bert instead of vecmap. This boils down again to the fact that yue, due to its specific language script, has a good representation of its words and subwords in the shared m-bert vocabulary. At the same time, a reliable vecmap mapping between yue and other languages cannot be found due to a small monolingual yue corpus. In cases when vecmap does not yield a degenerate cross-lingual vector space starting from two monolingual ones, the final correlation scores seem substantially higher than the ones obtained by the single massively multilingual m-bert model.",
"Finally, the results in Figure FIGREF60 again verify the usefulness of unsupervised post-processing also in cross-lingual settings. We observe improved performance with both m-bert and xlm-100 when mean centering (+mc) is applied, and further gains can be achieved by using abtt on the mean-centered vector spaces. A similar finding also holds for static cross-lingual word embeddings, where applying abbt (-10) yields higher scores on 61/66 language pairs.",
"",
"Fully Unsupervised vs. Weakly Supervised Cross-Lingual Embeddings. The results in Table TABREF59 indicate that fully unsupervised cross-lingual learning fails for a large number of language pairs. However, recent work BIBREF135 has noted that these sub-optimal non-alignment solutions with the unsuper model can be avoided by relying on (weak) cross-lingual supervision spanning only several thousands or even hundreds of word translation pairs. Therefore, we examine 1) if we can further improve the results on cross-lingual Multi-SimLex resorting to (at least some) cross-lingual supervision for resource-rich language pairs; and 2) if such available word-level supervision can also be useful for a range of languages which displayed near-zero performance in Table TABREF59. In other words, we test if recent “tricks of the trade” used in the rich literature on CLWE learning reflect in gains on cross-lingual Multi-SimLex datasets.",
"First, we reassess the findings established on the bilingual lexicon induction task BIBREF161, BIBREF135: using at least some cross-lingual supervision is always beneficial compared to using no supervision at all. We report improvements over the unsuper model for all 10 language pairs in Table TABREF66, even though the unsuper method initially produced strong correlation scores. The importance of self-learning increases with decreasing available seed dictionary size, and the +sl model always outperforms unsuper with 1k seed pairs; we observe the same patterns also with even smaller dictionary sizes than reported in Table TABREF66 (250 and 500 seed pairs). Along the same line, the results in Table TABREF67 indicate that at least some supervision is crucial for the success of static CLWEs on resource-leaner language pairs. We note substantial improvements on all language pairs; in fact, the vecmap model is able to learn a more reliable mapping starting from clean supervision. We again note large gains with self-learning.",
"",
"Multilingual vs. Bilingual Contextualized Embeddings. Similar to the monolingual settings, we also inspect if massively multilingual training in fact dilutes the knowledge necessary for cross-lingual reasoning on a particular language pair. Therefore, we compare the 100-language xlm-100 model with i) a variant of the same model trained on a smaller set of 17 languages (xlm-17); ii) a variant of the same model trained specifically for the particular language pair (xlm-2); and iii) a variant of the bilingual xlm-2 model that also leverages bilingual knowledge from parallel data during joint training (xlm-2++). We again use the pretrained models made available by Conneau:2019nips, and we refer to the original work for further technical details.",
"The results are summarized in Figure FIGREF60, and they confirm the intuition that massively multilingual pretraining can damage performance even on resource-rich languages and language pairs. We observe a steep rise in performance when the multilingual model is trained on a much smaller set of languages (17 versus 100), and further improvements can be achieved by training a dedicated bilingual model. Finally, leveraging bilingual parallel data seems to offer additional slight gains, but a tiny difference between xlm-2 and xlm-2++ also suggests that this rich bilingual information is not used in the optimal way within the xlm architecture for semantic similarity.",
"In summary, these results indicate that, in order to improve performance in cross-lingual transfer tasks, more work should be invested into 1) pretraining dedicated language pair-specific models, and 2) creative ways of leveraging available cross-lingual supervision (e.g., word translation pairs, parallel or comparable corpora) BIBREF121, BIBREF123, BIBREF124 with pretraining paradigms such as bert and xlm. Using such cross-lingual supervision could lead to similar benefits as indicated by the results obtained with static cross-lingual word embeddings (see Table TABREF66 and Table TABREF67). We believe that Multi-SimLex can serve as a valuable means to track and guide future progress in this research area."
],
[
"We have presented Multi-SimLex, a resource containing human judgments on the semantic similarity of word pairs for 12 monolingual and 66 cross-lingual datasets. The languages covered are typologically diverse and include also under-resourced ones, such as Welsh and Kiswahili. The resource covers an unprecedented amount of 1,888 word pairs, carefully balanced according to their similarity score, frequency, concreteness, part-of-speech class, and lexical field. In addition to Multi-Simlex, we release the detailed protocol we followed to create this resource. We hope that our consistent guidelines will encourage researchers to translate and annotate Multi-Simlex -style datasets for additional languages. This can help and create a hugely valuable, large-scale semantic resource for multilingual NLP research.",
"The core Multi-SimLex we release with this paper already enables researchers to carry out novel linguistic analysis as well as establishes a benchmark for evaluating representation learning models. Based on our preliminary analyses, we found that speakers of closely related languages tend to express equivalent similarity judgments. In particular, geographical proximity seems to play a greater role than family membership in determining the similarity of judgments across languages. Moreover, we tested several state-of-the-art word embedding models, both static and contextualized representations, as well as several (supervised and unsupervised) post-processing techniques, on the newly released Multi-SimLex. This enables future endeavors to improve multilingual representation learning with challenging baselines. In addition, our results provide several important insights for research on both monolingual and cross-lingual word representations:",
"",
"1) Unsupervised post-processing techniques (mean centering, elimination of top principal components, adjusting similarity orders) are always beneficial independently of the language, although the combination leading to the best scores is language-specific and hence needs to be tuned.",
"",
"2) Similarity rankings obtained from word embeddings for nouns are better aligned with human judgments than all the other part-of-speech classes considered here (verbs, adjectives, and, for the first time, adverbs). This confirms previous generalizations based on experiments on English.",
"",
"3) The factor having the greatest impact on the quality of word representations is the availability of raw texts to train them in the first place, rather than language properties (such as family, geographical area, typological features).",
"",
"4) Massively multilingual pretrained encoders such as m-bert BIBREF29 and xlm-100 BIBREF30 fare quite poorly on our benchmark, whereas pretrained encoders dedicated to a single language are more competitive with static word embeddings such as fastText BIBREF31. Moreover, for language-specific encoders, parameter reduction techniques reduce performance only marginally.",
"",
"5) Techniques to inject clean lexical semantic knowledge from external resources into distributional word representations were proven to be effective in emphasizing the relation of semantic similarity. In particular, methods capable of transferring such knowledge from resource-rich to resource-lean languages BIBREF59 increased the correlation with human judgments for most languages, except for those with limited unlabelled data.",
"Future work can expand our preliminary, yet large-scale study on the ability of pretrained encoders to reason over word-level semantic similarity in different languages. For instance, we have highlighted how sharing the same encoder parameters across multiple languages may harm performance. However, it remains unclear if, and to what extent, the input language embeddings present in xlm-100 but absent in m-bert help mitigate this issue. In addition, pretrained language embeddings can be obtained both from typological databases BIBREF99 and from neural architectures BIBREF100. Plugging these embeddings into the encoders in lieu of embeddings trained end-to-end as suggested by prior work BIBREF162, BIBREF163, BIBREF164 might extend the coverage to more resource-lean languages.",
"Another important follow-up analysis might involve the comparison of the performance of representation learning models on multilingual datasets for both word-level semantic similarity and sentence-level Natural Language Understanding. In particular, Multi-SimLex fills a gap in available resources for multilingual NLP and might help understand how lexical and compositional semantics interact if put alongside existing resources such as XNLI BIBREF84 for natural language inference or PAWS-X BIBREF89 for cross-lingual paraphrase identification. Finally, the Multi-SimLex annotation could turn out to be a unique source of evidence to study the effects of polysemy in human judgments on semantic similarity: for equivalent word pairs in multiple languages, are the similarity scores affected by how many senses the two words (or multi-word expressions) incorporate?",
"In light of the success of initiatives like Universal Dependencies for multilingual treebanks, we hope that making Multi-SimLex and its guidelines available will encourage other researchers to expand our current sample of languages. We particularly encourage creation and submission of comparable Multi-SimLex datasets for under-resourced and typologically diverse languages in future work. In particular, we have made a Multi-Simlex community website available to facilitate easy creation, gathering, dissemination, and use of annotated datasets: https://multisimlex.com/.",
"This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). Thierry Poibeau is partly supported by a PRAIRIE 3IA Institute fellowship (\"Investissements d'avenir\" program, reference ANR-19-P3IA-0001)."
]
],
"section_name": [
"Introduction",
"Lexical Semantic Similarity ::: Similarity and Association",
"Lexical Semantic Similarity ::: Similarity for NLP: Intrinsic Evaluation and Semantic Specialization",
"Lexical Semantic Similarity ::: Similarity and Language Variation: Semantic Typology",
"Previous Work and Evaluation Data",
"The Base for Multi-SimLex: Extending English SimLex-999",
"Multi-SimLex: Translation and Annotation",
"Multi-SimLex: Translation and Annotation ::: Word Pair Translation",
"Multi-SimLex: Translation and Annotation ::: Guidelines and Word Pair Scoring",
"Multi-SimLex: Translation and Annotation ::: Data Analysis",
"Multi-SimLex: Translation and Annotation ::: Effect of Language Affinity on Similarity Scores",
"Cross-Lingual Multi-SimLex Datasets",
"Monolingual Evaluation of Representation Learning Models",
"Monolingual Evaluation of Representation Learning Models ::: Models in Comparison",
"Monolingual Evaluation of Representation Learning Models ::: Results and Discussion",
"Cross-Lingual Evaluation",
"Cross-Lingual Evaluation ::: Models in Comparison",
"Cross-Lingual Evaluation ::: Results and Discussion",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"23af86e154dab10484d0ba84c8cf35a9e08adbae"
],
"answer": [
{
"evidence": [
"Across all languages, 145 human annotators were asked to score all 1,888 pairs (in their given language). We finally collect at least ten valid annotations for each word pair in each language. All annotators were required to abide by the following instructions:",
"1. Each annotator must assign an integer score between 0 and 6 (inclusive) indicating how semantically similar the two words in a given pair are. A score of 6 indicates very high similarity (i.e., perfect synonymy), while zero indicates no similarity.",
"2. Each annotator must score the entire set of 1,888 pairs in the dataset. The pairs must not be shared between different annotators.",
"3. Annotators are able to break the workload over a period of approximately 2-3 weeks, and are able to use external sources (e.g. dictionaries, thesauri, WordNet) if required.",
"4. Annotators are kept anonymous, and are not able to communicate with each other during the annotation process."
],
"extractive_spans": [
"1. Each annotator must assign an integer score between 0 and 6 (inclusive) indicating how semantically similar the two words in a given pair are. A score of 6 indicates very high similarity (i.e., perfect synonymy), while zero indicates no similarity.",
"2. Each annotator must score the entire set of 1,888 pairs in the dataset.",
" able to use external sources (e.g. dictionaries, thesauri, WordNet) if required",
"not able to communicate with each other during the annotation process"
],
"free_form_answer": "",
"highlighted_evidence": [
"Across all languages, 145 human annotators were asked to score all 1,888 pairs (in their given language). We finally collect at least ten valid annotations for each word pair in each language. All annotators were required to abide by the following instructions:\n\n1. Each annotator must assign an integer score between 0 and 6 (inclusive) indicating how semantically similar the two words in a given pair are. A score of 6 indicates very high similarity (i.e., perfect synonymy), while zero indicates no similarity.\n\n2. Each annotator must score the entire set of 1,888 pairs in the dataset. The pairs must not be shared between different annotators.\n\n3. Annotators are able to break the workload over a period of approximately 2-3 weeks, and are able to use external sources (e.g. dictionaries, thesauri, WordNet) if required.\n\n4. Annotators are kept anonymous, and are not able to communicate with each other during the annotation process."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1a41d3c7739acf72a458c61a337c3a2ff035ff55",
"8f6295cb84a7694134bfd4ffcb75832a6e23ff59"
],
"answer": [
{
"evidence": [
"Language Selection. Multi-SimLex comprises eleven languages in addition to English. The main objective for our inclusion criteria has been to balance language prominence (by number of speakers of the language) for maximum impact of the resource, while simultaneously having a diverse suite of languages based on their typological features (such as morphological type and language family). Table TABREF10 summarizes key information about the languages currently included in Multi-SimLex. We have included a mixture of fusional, agglutinative, isolating, and introflexive languages that come from eight different language families. This includes languages that are very widely used such as Chinese Mandarin and Spanish, and low-resource languages such as Welsh and Kiswahili. We hope to further include additional languages and inspire other researchers to contribute to the effort over the lifetime of this project.",
"FLOAT SELECTED: Table 1: The list of 12 languages in the Multi-SimLex multilingual suite along with their corresponding language family (IE = Indo-European), broad morphological type, and their ISO 639-3 code. The number of speakers is based on the total count of L1 and L2 speakers, according to ethnologue.com."
],
"extractive_spans": [],
"free_form_answer": "Chinese Mandarin, Welsh, English, Estonian, Finnish, French, Hebrew, Polish, Russian, Spanish, Kiswahili, Yue Chinese",
"highlighted_evidence": [
"Table TABREF10 summarizes key information about the languages currently included in Multi-SimLex.",
"FLOAT SELECTED: Table 1: The list of 12 languages in the Multi-SimLex multilingual suite along with their corresponding language family (IE = Indo-European), broad morphological type, and their ISO 639-3 code. The number of speakers is based on the total count of L1 and L2 speakers, according to ethnologue.com."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: The list of 12 languages in the Multi-SimLex multilingual suite along with their corresponding language family (IE = Indo-European), broad morphological type, and their ISO 639-3 code. The number of speakers is based on the total count of L1 and L2 speakers, according to ethnologue.com."
],
"extractive_spans": [],
"free_form_answer": "Chinese Mandarin, Welsh, English, Estonian, Finnish, French, Hebrew, Polish, Russian, Spanish, Kiswahili, Yue Chinese",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: The list of 12 languages in the Multi-SimLex multilingual suite along with their corresponding language family (IE = Indo-European), broad morphological type, and their ISO 639-3 code. The number of speakers is based on the total count of L1 and L2 speakers, according to ethnologue.com."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"How were the datasets annotated?",
"What are the 12 languages covered?"
],
"question_id": [
"71b1af123fe292fd9950b8439db834212f0b0e32",
"a616a3f0d244368ec588f04dfbc37d77fda01b4c"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"spanish",
"spanish"
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Table 1: The list of 12 languages in the Multi-SimLex multilingual suite along with their corresponding language family (IE = Indo-European), broad morphological type, and their ISO 639-3 code. The number of speakers is based on the total count of L1 and L2 speakers, according to ethnologue.com.",
"Table 2: Inter-translator agreement (% of matched translated words) by independent translators using a randomly selected 100-pair English sample from the Multi-SimLex dataset, and the corresponding 100-pair samples from the other datasets.",
"Table 3: Number of human annotators. R1 = Annotation Round 1, R3 = Round 3.",
"Table 4: Average pairwise inter-annotator agreement (APIAA). A score of 0.6 and above indicates strong agreement.",
"Table 5: Average mean inter-annotator agreement (AMIAA). A score of 0.6 and above indicates strong agreement.",
"Table 6: Fine-grained distribution of concept pairs over different rating intervals in each Multi-SimLex language, reported as percentages. The total number of concept pairs in each dataset is 1,888.",
"Figure 1: Spearman’s correlation coefficient (ρ) of the similarity scores for all languages in Multi-SimLex.",
"Figure 2: A distribution over different frequency ranges for words from Multi-SimLex datasets for selected languages. Multi-word expressions are excluded from the analysis.",
"Table 7: Examples of concept pairs with their similarity scores from four languages. For brevity, only the original English concept pair is included, but note that the pair is translated to all target languages, see §5.1.",
"Table 8: Examples of concept pairs with their similarity scores from four languages where all participants show strong agreement in their rating.",
"Figure 3: PCA of the language vectors resulting from the concatenation of similarity judgments for all pairs.",
"Table 9: Mantel test on the correlation between similarity judgments from Multi-SimLex and linguistic features from typological databases.",
"Table 10: Example concept pairs with their scores from a selection of cross-lingual MultiSimLex datasets.",
"Table 11: The sizes of all monolingual (main diagonal) and cross-lingual datasets.",
"Figure 4: (a) Rating distribution and (b) distribution of pairs over the four POS classes in cross-lingual Multi-SimLex datasets averaged across each of the 66 language pairs (y axes plot percentages as the total number of concept pairs varies across different cross-lingual datasets). Minimum and maximum percentages for each rating interval and POS class are also plotted.",
"Table 12: A summary of results (Spearman’s ρ correlation scores) on the full monolingual Multi-SimLex datasets for 12 languages. We benchmark fastText word embeddings trained on two different corpora (CC+Wiki and only Wiki) as well the multilingual MBERT model (see §7.1). Results with the initial word vectors are reported (i.e., without any unsupervised post-processing), as well as with different unsupervised post-processing methods, described in §7.1. The language codes are provided in Table 1. The numbers in the parentheses (gray rows) refer to the number of OOV concepts excluded from the computation. The highest scores for each language and per model are in bold.",
"Table 13: Spearman’s ρ correlation scores over the four POS classes represented in MultiSimLex datasets. In addition to the word vectors considered earlier in Table 12, we also report scores for another contextualized model, XLM-100. The numbers in parentheses refer to the total number of POS-class pairs in the original ENG dataset and, consequently, in all other monolingual datasets.",
"Figure 5: (a) A performance comparison between monolingual pretrained language encoders and massively multilingual encoders. For four languages (CMN, ENG, FIN, SPA), we report the scores with monolingual uncased BERT-BASE architectures and multilingual uncased M-BERT model, while for FRA we report the results of the multilingual XLM-100 architecture and a monolingual French FlauBERT model (Le et al. 2019), which is based on the same architecture as XLM-100. (b) A comparison of various pretrained encoders available for English. All these models are post-processed via ABTT (-3).",
"Table 14: The impact of vector space specialization for semantic similarity. The scores are reported using the current state-of-the-art specialization transfer LI-POSTSPEC method of Ponti et al. (2019c), relying on English as a resource-rich source language and the external lexical semantic knowledge from the English WordNet.",
"Table 15: Spearman’s ρ correlation scores on all 66 cross-lingual datasets. 1) The scores below the main diagonal are computed based on cross-lingual word embeddings (CLWEs) induced by aligning CC+Wiki FT in all languages (except for YUE where we use Wiki FT) in a fully unsupervised way (i.e., without any bilingual supervision). We rely on a standard CLWE mapping-based (i.e., alignment) approach: VECMAP (Artetxe, Labaka, and Agirre 2018b). 2) The scores above the main diagonal are computed by obtaining 768-dimensional word-level vectors from pretrained multilingual BERT (MBERT) following the procedure described in §7.1. For both fully unsupervised VECMAP and M-BERT, we report the results with unsupervised postprocessing enabled: all 2× 66 reported scores are obtained using the +ABBT (-10) variant.",
"Figure 6: Further performance analyses of cross-lingual Multi-SimLex datasets. (a) Spearman’s ρ correlation scores averaged over all 66 cross-lingual Multi-SimLex datasets for two pretrained multilingual encoders (M-BERT and XLM). The scores are obtained with different configurations that exclude (INIT) or enable unsupervised post-processing. (b) A comparison of various pretrained encoders available for the English-French language pair, see the main text for a short description of each benchmarked pretrained encoder.",
"Table 16: Results on a selection of cross-lingual Multi-SimLex datasets where the fully unsupervised (UNSUPER) CLWE variant yields reasonable performance. We also show the results with supervised VECMAP without self-learning (SUPER) and with self-learning (+SL), with two seed dictionary sizes: 1k and 5k pairs; see §8.1 for more detail. Highest scores for each language pair are in bold.",
"Table 17: Results on a selection of cross-lingual Multi-SimLex datasets where the fully unsupervised (UNSUPER) CLWE variant fails to learn a coherent shared cross-lingual space. See also the caption of Table 16."
],
"file": [
"11-Table1-1.png",
"12-Table2-1.png",
"13-Table3-1.png",
"14-Table4-1.png",
"14-Table5-1.png",
"15-Table6-1.png",
"15-Figure1-1.png",
"16-Figure2-1.png",
"16-Table7-1.png",
"18-Table8-1.png",
"19-Figure3-1.png",
"20-Table9-1.png",
"21-Table10-1.png",
"21-Table11-1.png",
"22-Figure4-1.png",
"26-Table12-1.png",
"27-Table13-1.png",
"30-Figure5-1.png",
"32-Table14-1.png",
"33-Table15-1.png",
"34-Figure6-1.png",
"36-Table16-1.png",
"36-Table17-1.png"
]
} | [
"What are the 12 languages covered?"
] | [
[
"2003.04866-Multi-SimLex: Translation and Annotation-2",
"2003.04866-11-Table1-1.png"
]
] | [
"Chinese Mandarin, Welsh, English, Estonian, Finnish, French, Hebrew, Polish, Russian, Spanish, Kiswahili, Yue Chinese"
] | 237 |
1704.04452 | Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps | Concept maps can be used to concisely represent important information and bring structure into large document collections. Therefore, we study a variant of multi-document summarization that produces summaries in the form of concept maps. However, suitable evaluation datasets for this task are currently missing. To close this gap, we present a newly created corpus of concept maps that summarize heterogeneous collections of web documents on educational topics. It was created using a novel crowdsourcing approach that allows us to efficiently determine important elements in large document collections. We release the corpus along with a baseline system and proposed evaluation protocol to enable further research on this variant of summarization. | {
"paragraphs": [
[
"Multi-document summarization (MDS), the transformation of a set of documents into a short text containing their most important aspects, is a long-studied problem in NLP. Generated summaries have been shown to support humans dealing with large document collections in information seeking tasks BIBREF0 , BIBREF1 , BIBREF2 . However, when exploring a set of documents manually, humans rarely write a fully-formulated summary for themselves. Instead, user studies BIBREF3 , BIBREF4 show that they note down important keywords and phrases, try to identify relationships between them and organize them accordingly. Therefore, we believe that the study of summarization with similarly structured outputs is an important extension of the traditional task.",
"A representation that is more in line with observed user behavior is a concept map BIBREF5 , a labeled graph showing concepts as nodes and relationships between them as edges (Figure FIGREF2 ). Introduced in 1972 as a teaching tool BIBREF6 , concept maps have found many applications in education BIBREF7 , BIBREF8 , for writing assistance BIBREF9 or to structure information repositories BIBREF10 , BIBREF11 . For summarization, concept maps make it possible to represent a summary concisely and clearly reveal relationships. Moreover, we see a second interesting use case that goes beyond the capabilities of textual summaries: When concepts and relations are linked to corresponding locations in the documents they have been extracted from, the graph can be used to navigate in a document collection, similar to a table of contents. An implementation of this idea has been recently described by BIBREF12 .",
"The corresponding task that we propose is concept-map-based MDS, the summarization of a document cluster in the form of a concept map. In order to develop and evaluate methods for the task, gold-standard corpora are necessary, but no suitable corpus is available. The manual creation of such a dataset is very time-consuming, as the annotation includes many subtasks. In particular, an annotator would need to manually identify all concepts in the documents, while only a few of them will eventually end up in the summary.",
"To overcome these issues, we present a corpus creation method that effectively combines automatic preprocessing, scalable crowdsourcing and high-quality expert annotations. Using it, we can avoid the high effort for single annotators, allowing us to scale to document clusters that are 15 times larger than in traditional summarization corpora. We created a new corpus of 30 topics, each with around 40 source documents on educational topics and a summarizing concept map that is the consensus of many crowdworkers (see Figure FIGREF3 ).",
"As a crucial step of the corpus creation, we developed a new crowdsourcing scheme called low-context importance annotation. In contrast to traditional approaches, it allows us to determine important elements in a document cluster without requiring annotators to read all documents, making it feasible to crowdsource the task and overcome quality issues observed in previous work BIBREF13 . We show that the approach creates reliable data for our focused summarization scenario and, when tested on traditional summarization corpora, creates annotations that are similar to those obtained by earlier efforts.",
"To summarize, we make the following contributions: (1) We propose a novel task, concept-map-based MDS (§ SECREF2 ), (2) present a new crowdsourcing scheme to create reference summaries (§ SECREF4 ), (3) publish a new dataset for the proposed task (§ SECREF5 ) and (4) provide an evaluation protocol and baseline (§ SECREF7 ). We make these resources publicly available under a permissive license."
],
[
"Concept-map-based MDS is defined as follows: Given a set of related documents, create a concept map that represents its most important content, satisfies a specified size limit and is connected.",
"We define a concept map as a labeled graph showing concepts as nodes and relationships between them as edges. Labels are arbitrary sequences of tokens taken from the documents, making the summarization task extractive. A concept can be an entity, abstract idea, event or activity, designated by its unique label. Good maps should be propositionally coherent, meaning that every relation together with the two connected concepts form a meaningful proposition.",
"The task is complex, consisting of several interdependent subtasks. One has to extract appropriate labels for concepts and relations and recognize different expressions that refer to the same concept across multiple documents. Further, one has to select the most important concepts and relations for the summary and finally organize them in a graph satisfying the connectedness and size constraints."
],
[
"Some attempts have been made to automatically construct concept maps from text, working with either single documents BIBREF14 , BIBREF9 , BIBREF15 , BIBREF16 or document clusters BIBREF17 , BIBREF18 , BIBREF19 . These approaches extract concept and relation labels from syntactic structures and connect them to build a concept map. However, common task definitions and comparable evaluations are missing. In addition, only a few of them, namely Villalon.2012 and Valerio.2006, define summarization as their goal and try to compress the input to a substantially smaller size. Our newly proposed task and the created large-cluster dataset fill these gaps as they emphasize the summarization aspect of the task.",
"For the subtask of selecting summary-worthy concepts and relations, techniques developed for traditional summarization BIBREF20 and keyphrase extraction BIBREF21 are related and applicable. Approaches that build graphs of propositions to create a summary BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 seem to be particularly related, however, there is one important difference: While they use graphs as an intermediate representation from which a textual summary is then generated, the goal of the proposed task is to create a graph that is directly interpretable and useful for a user. In contrast, these intermediate graphs, e.g. AMR, are hardly useful for a typical, non-linguist user.",
"For traditional summarization, the most well-known datasets emerged out of the DUC and TAC competitions. They provide clusters of news articles with gold-standard summaries. Extending these efforts, several more specialized corpora have been created: With regard to size, Nakano.2010 present a corpus of summaries for large-scale collections of web pages. Recently, corpora with more heterogeneous documents have been suggested, e.g. BIBREF26 and BIBREF27 . The corpus we present combines these aspects, as it has large clusters of heterogeneous documents, and provides a necessary benchmark to evaluate the proposed task.",
"For concept map generation, one corpus with human-created summary concept maps for student essays has been created BIBREF28 . In contrast to our corpus, it only deals with single documents, requires a two orders of magnitude smaller amount of compression of the input and is not publicly available .",
"Other types of information representation that also model concepts and their relationships are knowledge bases, such as Freebase BIBREF29 , and ontologies. However, they both differ in important aspects: Whereas concept maps follow an open label paradigm and are meant to be interpretable by humans, knowledge bases and ontologies are usually more strictly typed and made to be machine-readable. Moreover, approaches to automatically construct them from text typically try to extract as much information as possible, while we want to summarize a document."
],
[
"Lloret.2013 describe several experiments to crowdsource reference summaries. Workers are asked to read 10 documents and then select 10 summary sentences from them for a reward of $0.05. They discovered several challenges, including poor work quality and the subjectiveness of the annotation task, indicating that crowdsourcing is not useful for this purpose.",
"To overcome these issues, we introduce a new task design, low-context importance annotation, to determine summary-worthy parts of documents. Compared to Lloret et al.'s approach, it is more in line with crowdsourcing best practices, as the tasks are simple, intuitive and small BIBREF30 and workers receive reasonable payment BIBREF31 . Most importantly, it is also much more efficient and scalable, as it does not require workers to read all documents in a cluster."
],
[
"We break down the task of importance annotation to the level of single propositions. The goal of our crowdsourcing scheme is to obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary. In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster's topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4 ).",
"In preliminary tests, we found that this design, despite the minimal context, works reasonably on our focused clusters on common educational topics. For instance, consider Figure FIGREF4 : One can easily say that P1 is more important than P2 without reading the documents.",
"We distinguish two task variants:",
"Instead of enforcing binary importance decisions, we use a 5-point Likert-scale to allow more fine-grained annotations. The obtained labels are translated into scores (5..1) and the average of all scores for a proposition is used as an estimate for its importance. This follows the idea that while single workers might find the task subjective, the consensus of multiple workers, represented in the average score, tends to be less subjective due to the “wisdom of the crowd”. We randomly group five propositions into a task.",
"As an alternative, we use a second task design based on pairwise comparisons. Comparisons are known to be easier to make and more consistent BIBREF32 , but also more expensive, as the number of pairs grows quadratically with the number of objects. To reduce the cost, we group five propositions into a task and ask workers to order them by importance per drag-and-drop. From the results, we derive pairwise comparisons and use TrueSkill BIBREF35 , a powerful Bayesian rank induction model BIBREF34 , to obtain importance estimates for each proposition."
],
[
"To verify the proposed approach, we conducted a pilot study on Amazon Mechanical Turk using data from TAC2008 BIBREF36 . We collected importance estimates for 474 propositions extracted from the first three clusters using both task designs. Each Likert-scale task was assigned to 5 different workers and awarded $0.06. For comparison tasks, we also collected 5 labels each, paid $0.05 and sampled around 7% of all possible pairs. We submitted them in batches of 100 pairs and selected pairs for subsequent batches based on the confidence of the TrueSkill model.",
"Following the observations of Lloret.2013, we established several measures for quality control. First, we restricted our tasks to workers from the US with an approval rate of at least 95%. Second, we identified low quality workers by measuring the correlation of each worker's Likert-scores with the average of the other four scores. The worst workers (at most 5% of all labels) were removed.",
"In addition, we included trap sentences, similar as in BIBREF13 , in around 80 of the tasks. In contrast to Lloret et al.'s findings, both an obvious trap sentence (This sentence is not important) and a less obvious but unimportant one (Barack Obama graduated from Harvard Law) were consistently labeled as unimportant (1.08 and 1.14), indicating that the workers did the task properly.",
"For Likert-scale tasks, we follow Snow.2008 and calculate agreement as the average Pearson correlation of a worker's Likert-score with the average score of the remaining workers. This measure is less strict than exact label agreement and can account for close labels and high- or low-scoring workers. We observe a correlation of 0.81, indicating substantial agreement. For comparisons, the majority agreement is 0.73. To further examine the reliability of the collected data, we followed the approach of Kiritchenko.2016 and simply repeated the crowdsourcing for one of the three topics. Between the importance estimates calculated from the first and second run, we found a Pearson correlation of 0.82 (Spearman 0.78) for Likert-scale tasks and 0.69 (Spearman 0.66) for comparison tasks. This shows that the approach, despite the subjectiveness of the task, allows us to collect reliable annotations.",
"In addition to the reliability studies, we extrinsically evaluated the annotations in the task of summary evaluation. For each of the 58 peer summaries in TAC2008, we calculated a score as the sum of the importance estimates of the propositions it contains. Table TABREF13 shows how these peer scores, averaged over the three topics, correlate with the manual responsiveness scores assigned during TAC in comparison to ROUGE-2 and Pyramid scores. The results demonstrate that with both task designs, we obtain importance annotations that are similarly useful for summary evaluation as pyramid annotations or gold-standard summaries (used for ROUGE).",
"Based on the pilot study, we conclude that the proposed crowdsourcing scheme allows us to obtain proper importance annotations for propositions. As workers are not required to read all documents, the annotation is much more efficient and scalable as with traditional methods."
],
[
"This section presents the corpus construction process, as outlined in Figure FIGREF16 , combining automatic preprocessing, scalable crowdsourcing and high-quality expert annotations to be able to scale to the size of our document clusters. For every topic, we spent about $150 on crowdsourcing and 1.5h of expert annotations, while just a single annotator would already need over 8 hours (at 200 words per minute) to read all documents of a topic."
],
[
"As a starting point, we used the DIP corpus BIBREF37 , a collection of 49 clusters of 100 web pages on educational topics (e.g. bullying, homeschooling, drugs) with a short description of each topic. It was created from a large web crawl using state-of-the-art information retrieval. We selected 30 of the topics for which we created the necessary concept map annotations."
],
[
"As concept maps consist of propositions expressing the relation between concepts (see Figure FIGREF2 ), we need to impose such a structure upon the plain text in the document clusters. This could be done by manually annotating spans representing concepts and relations, however, the size of our clusters makes this a huge effort: 2288 sentences per topic (69k in total) need to be processed. Therefore, we resort to an automatic approach.",
"The Open Information Extraction paradigm BIBREF38 offers a representation very similar to the desired one. For instance, from",
"",
"Students with bad credit history should not lose hope and apply for federal loans with the FAFSA.",
"",
"Open IE systems extract tuples of two arguments and a relation phrase representing propositions:",
"",
"(s. with bad credit history, should not lose, hope)",
"(s. with bad credit history, apply for, federal loans with the FAFSA)",
"",
"While the relation phrase is similar to a relation in a concept map, many arguments in these tuples represent useful concepts. We used Open IE 4, a state-of-the-art system BIBREF39 to process all sentences. After removing duplicates, we obtained 4137 tuples per topic.",
"Since we want to create a gold-standard corpus, we have to ensure that we produce high-quality data. We therefore made use of the confidence assigned to every extracted tuple to filter out low quality ones. To ensure that we do not filter too aggressively (and miss important aspects in the final summary), we manually annotated 500 tuples sampled from all topics for correctness. On the first 250 of them, we tuned the filter threshold to 0.5, which keeps 98.7% of the correct extractions in the unseen second half. After filtering, a topic had on average 2850 propositions (85k in total)."
],
[
"Despite the similarity of the Open IE paradigm, not every extracted tuple is a suitable proposition for a concept map. To reduce the effort in the subsequent steps, we therefore want to filter out unsuitable ones. A tuple is suitable if it (1) is a correct extraction, (2) is meaningful without any context and (3) has arguments that represent proper concepts. We created a guideline explaining when to label a tuple as suitable for a concept map and performed a small annotation study. Three annotators independently labeled 500 randomly sampled tuples. The agreement was 82% ( INLINEFORM0 ). We found tuples to be unsuitable mostly because they had unresolvable pronouns, conflicting with (2), or arguments that were full clauses or propositions, conflicting with (3), while (1) was mostly taken care of by the confidence filtering in § SECREF21 .",
"Due to the high number of tuples we decided to automate the filtering step. We trained a linear SVM on the majority voted annotations. As features, we used the extraction confidence, length of arguments and relations as well as part-of-speech tags, among others. To ensure that the automatic classification does not remove suitable propositions, we tuned the classifier to avoid false negatives. In particular, we introduced class weights, improving precision on the negative class at the cost of a higher fraction of positive classifications. Additionally, we manually verified a certain number of the most uncertain negative classifications to further improve performance. When 20% of the classifications are manually verified and corrected, we found that our model trained on 350 labeled instances achieves 93% precision on negative classifications on the unseen 150 instances. We found this to be a reasonable trade-off of automation and data quality and applied the model to the full dataset.",
"The classifier filtered out 43% of the propositions, leaving 1622 per topic. We manually examined the 17k least confident negative classifications and corrected 955 of them. We also corrected positive classifications for certain types of tuples for which we knew the classifier to be imprecise. Finally, each topic was left with an average of 1554 propositions (47k in total)."
],
[
"Given the propositions identified in the previous step, we now applied our crowdsourcing scheme as described in § SECREF4 to determine their importance. To cope with the large number of propositions, we combine the two task designs: First, we collect Likert-scores from 5 workers for each proposition, clean the data and calculate average scores. Then, using only the top 100 propositions according to these scores, we crowdsource 10% of all possible pairwise comparisons among them. Using TrueSkill, we obtain a fine-grained ranking of the 100 most important propositions.",
"For Likert-scores, the average agreement over all topics is 0.80, while the majority agreement for comparisons is 0.78. We repeated the data collection for three randomly selected topics and found the Pearson correlation between both runs to be 0.73 (Spearman 0.73) for Likert-scores and 0.72 (Spearman 0.71) for comparisons. These figures show that the crowdsourcing approach works on this dataset as reliably as on the TAC documents.",
"In total, we uploaded 53k scoring and 12k comparison tasks to Mechanical Turk, spending $4425.45 including fees. From the fine-grained ranking of the 100 most important propositions, we select the top 50 per topic to construct a summary concept map in the subsequent steps."
],
[
"Having a manageable number of propositions, an annotator then applied a few straightforward transformations that correct common errors of the Open IE system. First, we break down propositions with conjunctions in either of the arguments into separate propositions per conjunct, which the Open IE system sometimes fails to do. And second, we correct span errors that might occur in the argument or relation phrases, especially when sentences were not properly segmented. As a result, we have a set of high quality propositions for our concept map, consisting of, due to the first transformation, 56.1 propositions per topic on average."
],
[
"In this final step, we connect the set of important propositions to form a graph. For instance, given the following two propositions",
"",
"(student, may borrow, Stafford Loan)",
"(the student, does not have, a credit history)",
"",
"",
"one can easily see, although the first arguments differ slightly, that both labels describe the concept student, allowing us to build a concept map with the concepts student, Stafford Loan and credit history. The annotation task thus involves deciding which of the available propositions to include in the map, which of their concepts to merge and, when merging, which of the available labels to use. As these decisions highly depend upon each other and require context, we decided to use expert annotators rather than crowdsource the subtasks.",
"Annotators were given the topic description and the most important, ranked propositions. Using a simple annotation tool providing a visualization of the graph, they could connect the propositions step by step. They were instructed to reach a size of 25 concepts, the recommended maximum size for a concept map BIBREF6 . Further, they should prefer more important propositions and ensure connectedness. When connecting two propositions, they were asked to keep the concept label that was appropriate for both propositions. To support the annotators, the tool used ADW BIBREF40 , a state-of-the-art approach for semantic similarity, to suggest possible connections. The annotation was carried out by graduate students with a background in NLP after receiving an introduction into the guidelines and tool and annotating a first example.",
"If an annotator was not able to connect 25 concepts, she was allowed to create up to three synthetic relations with freely defined labels, making the maps slightly abstractive. On average, the constructed maps have 0.77 synthetic relations, mostly connecting concepts whose relation is too obvious to be explicitly stated in text (e.g. between Montessori teacher and Montessori education).",
"To assess the reliability of this annotation step, we had the first three maps created by two annotators. We casted the task of selecting propositions to be included in the map as a binary decision task and observed an agreement of 84% ( INLINEFORM0 ). Second, we modeled the decision which concepts to join as a binary decision on all pairs of common concepts, observing an agreement of 95% ( INLINEFORM1 ). And finally, we compared which concept labels the annotators decided to include in the final map, observing 85% ( INLINEFORM2 ) agreement. Hence, the annotation shows substantial agreement BIBREF41 ."
],
[
"In this section, we describe our newly created corpus, which, in addition to having summaries in the form of concept maps, differs from traditional summarization corpora in several aspects."
],
[
"The corpus consists of document clusters for 30 different topics. Each of them contains around 40 documents with on average 2413 tokens, which leads to an average cluster size of 97,880 token. With these characteristics, the document clusters are 15 times larger than typical DUC clusters of ten documents and five times larger than the 25-document-clusters (Table TABREF26 ). In addition, the documents are also more variable in terms of length, as the (length-adjusted) standard deviation is twice as high as in the other corpora. With these properties, the corpus represents an interesting challenge towards real-world application scenarios, in which users typically have to deal with much more than ten documents.",
"Because we used a large web crawl as the source for our corpus, it contains documents from a variety of genres. To further analyze this property, we categorized a sample of 50 documents from the corpus. Among them, we found professionally written articles and blog posts (28%), educational material for parents and kids (26%), personal blog posts (16%), forum discussions and comments (12%), commented link collections (12%) and scientific articles (6%).",
"In addition to the variety of genres, the documents also differ in terms of language use. To capture this property, we follow Zopf.2016 and compute, for every topic, the average Jensen-Shannon divergence between the word distribution of one document and the word distribution in the remaining documents. The higher this value is, the more the language differs between documents. We found the average divergence over all topics to be 0.3490, whereas it is 0.3019 in DUC 2004 and 0.3188 in TAC 2008A."
],
[
"As Table TABREF33 shows, each of the 30 reference concept maps has exactly 25 concepts and between 24 and 28 relations. Labels for both concepts and relations consist on average of 3.2 tokens, whereas the latter are a bit shorter in characters.",
"To obtain a better picture of what kind of text spans have been used as labels, we automatically tagged them with their part-of-speech and determined their head with a dependency parser. Concept labels tend to be headed by nouns (82%) or verbs (15%), while they also contain adjectives, prepositions and determiners. Relation labels, on the other hand, are almost always headed by a verb (94%) and contain prepositions, nouns and particles in addition. These distributions are very similar to those reported by Villalon.2010 for their (single-document) concept map corpus.",
"Analyzing the graph structure of the maps, we found that all of them are connected. They have on average 7.2 central concepts with more than one relation, while the remaining ones occur in only one proposition. We found that achieving a higher number of connections would mean compromising importance, i.e. including less important propositions, and decided against it."
],
[
"In this section, we briefly describe a baseline and evaluation scripts that we release, with a detailed documentation, along with the corpus."
],
[
"In this work, we presented low-context importance annotation, a novel crowdsourcing scheme that we used to create a new benchmark corpus for concept-map-based MDS. The corpus has large-scale document clusters of heterogeneous web documents, posing a challenging summarization task. Together with the corpus, we provide implementations of a baseline method and evaluation scripts and hope that our efforts facilitate future research on this variant of summarization."
],
[
"We would like to thank Teresa Botschen, Andreas Hanselowski and Markus Zopf for their help with the annotation work and Christian Meyer for his valuable feedback. This work has been supported by the German Research Foundation as part of the Research Training Group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES) under grant No. GRK 1994/1."
]
],
"section_name": [
"Introduction",
"Task",
"Related Work",
"Low-Context Importance Annotation",
"Task Design",
"Pilot Study",
"Corpus Creation",
"Source Data",
"Proposition Extraction",
"Proposition Filtering",
"Importance Annotation",
"Proposition Revision",
"Concept Map Construction",
"Corpus Analysis",
"Document Clusters",
"Concept Maps",
"Baseline Experiments",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"1c705e97305e72ed2f01fb62257eaf1c2b02ce5e",
"544e18f90c478925ad5864b553603b6935e32e7c",
"b65c315516642a6e5ad5621f5d5cd45c19e90b96"
],
"answer": [
{
"evidence": [
"To overcome these issues, we present a corpus creation method that effectively combines automatic preprocessing, scalable crowdsourcing and high-quality expert annotations. Using it, we can avoid the high effort for single annotators, allowing us to scale to document clusters that are 15 times larger than in traditional summarization corpora. We created a new corpus of 30 topics, each with around 40 source documents on educational topics and a summarizing concept map that is the consensus of many crowdworkers (see Figure FIGREF3 ).",
"FLOAT SELECTED: Figure 2: Excerpt from a summary concept map on the topic “students loans without credit history”."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We created a new corpus of 30 topics, each with around 40 source documents on educational topics and a summarizing concept map that is the consensus of many crowdworkers (see Figure FIGREF3 ).",
"FLOAT SELECTED: Figure 2: Excerpt from a summary concept map on the topic “students loans without credit history”."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"552d0b9a1ff1417d5aa072e8799187bea5896b10"
],
"answer": [
{
"evidence": [
"Baseline Experiments",
"In this section, we briefly describe a baseline and evaluation scripts that we release, with a detailed documentation, along with the corpus."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Evaluation Metrics section) Precision, Recall, F1-scores, Strict match, METEOR, ROUGE-2",
"highlighted_evidence": [
"Baseline Experiments\nIn this section, we briefly describe a baseline and evaluation scripts that we release, with a detailed documentation, along with the corpus."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"674a69fb7c46d83e5be4282ef74c161b3ef71ed1"
],
"answer": [
{
"evidence": [
"Baseline Experiments",
"In this section, we briefly describe a baseline and evaluation scripts that we release, with a detailed documentation, along with the corpus."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Baseline Method section) We implemented a simple approach inspired by previous work on concept map generation and keyphrase extraction.",
"highlighted_evidence": [
"Baseline Experiments\nIn this section, we briefly describe a baseline and evaluation scripts that we release, with a detailed documentation, along with the corpus."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"71f740d60089847be494cc3e30f1b2238236bf5c",
"ae126482811e1850609fee8fa05e903c67be5089"
],
"answer": [
{
"evidence": [
"We break down the task of importance annotation to the level of single propositions. The goal of our crowdsourcing scheme is to obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary. In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster's topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4 )."
],
"extractive_spans": [
"provide only a description of the document cluster's topic along with the propositions"
],
"free_form_answer": "",
"highlighted_evidence": [
"In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster's topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4 )."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We break down the task of importance annotation to the level of single propositions. The goal of our crowdsourcing scheme is to obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary. In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster's topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4 )."
],
"extractive_spans": [],
"free_form_answer": "They break down the task of importance annotation to the level of single propositions and obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary.",
"highlighted_evidence": [
"We break down the task of importance annotation to the level of single propositions. The goal of our crowdsourcing scheme is to obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary. In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster's topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4 )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"a25b36b0a9d801fce1688c675c15708847e3cd14"
],
"answer": [
{
"evidence": [
"As a starting point, we used the DIP corpus BIBREF37 , a collection of 49 clusters of 100 web pages on educational topics (e.g. bullying, homeschooling, drugs) with a short description of each topic. It was created from a large web crawl using state-of-the-art information retrieval. We selected 30 of the topics for which we created the necessary concept map annotations."
],
"extractive_spans": [
"DIP corpus BIBREF37"
],
"free_form_answer": "",
"highlighted_evidence": [
"As a starting point, we used the DIP corpus BIBREF37 , a collection of 49 clusters of 100 web pages on educational topics (e.g. bullying, homeschooling, drugs) with a short description of each topic."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"dd8961cb5cc8da505cc840b9f90989bb254dd39f"
],
"answer": [
{
"evidence": [
"A representation that is more in line with observed user behavior is a concept map BIBREF5 , a labeled graph showing concepts as nodes and relationships between them as edges (Figure FIGREF2 ). Introduced in 1972 as a teaching tool BIBREF6 , concept maps have found many applications in education BIBREF7 , BIBREF8 , for writing assistance BIBREF9 or to structure information repositories BIBREF10 , BIBREF11 . For summarization, concept maps make it possible to represent a summary concisely and clearly reveal relationships. Moreover, we see a second interesting use case that goes beyond the capabilities of textual summaries: When concepts and relations are linked to corresponding locations in the documents they have been extracted from, the graph can be used to navigate in a document collection, similar to a table of contents. An implementation of this idea has been recently described by BIBREF12 ."
],
"extractive_spans": [
"concept map BIBREF5 , a labeled graph showing concepts as nodes and relationships between them as edges"
],
"free_form_answer": "",
"highlighted_evidence": [
"A representation that is more in line with observed user behavior is a concept map BIBREF5 , a labeled graph showing concepts as nodes and relationships between them as edges (Figure FIGREF2 )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"Does the corpus contain only English documents?",
"What type of evaluation is proposed for this task?",
"What baseline system is proposed?",
"How were crowd workers instructed to identify important elements in large document collections?",
"Which collections of web documents are included in the corpus?",
"How do the authors define a concept map?"
],
"question_id": [
"8e44c02c2d9fa56fb74ace35ee70a5add50b52ae",
"1522ccedbb1f668958f24cca070f640274bc2549",
"97466a37525536086ed5d6e5ed143df085682318",
"e9cfbfdf30e48cffdeca58d4ac6fdd66a8b27d7a",
"d6191c4643201262a770947fc95a613f57bedb6b",
"ffeb67a61ecd09542b1c53c3e4c3abd4da0496a8"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Elements of a concept map.",
"Figure 2: Excerpt from a summary concept map on the topic “students loans without credit history”.",
"Figure 3: Likert-scale crowdsourcing task with topic description and two example propositions.",
"Table 1: Correlation of peer scores with manual responsiveness scores on TAC2008 topics 01-03.",
"Figure 4: Steps of the corpus creation (with references to the corresponding sections).",
"Table 2: Topic clusters in comparison to classic corpora (size in token, mean with standard deviation).",
"Table 3: Size of concept maps (mean with std).",
"Table 4: Baseline performance on test set."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure3-1.png",
"4-Table1-1.png",
"5-Figure4-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png"
]
} | [
"What type of evaluation is proposed for this task?",
"What baseline system is proposed?",
"How were crowd workers instructed to identify important elements in large document collections?"
] | [
[
"1704.04452-Baseline Experiments-0"
],
[
"1704.04452-Baseline Experiments-0"
],
[
"1704.04452-Task Design-0"
]
] | [
"Answer with content missing: (Evaluation Metrics section) Precision, Recall, F1-scores, Strict match, METEOR, ROUGE-2",
"Answer with content missing: (Baseline Method section) We implemented a simple approach inspired by previous work on concept map generation and keyphrase extraction.",
"They break down the task of importance annotation to the level of single propositions and obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary."
] | 238 |
2003.11562 | Finnish Language Modeling with Deep Transformer Models | Transformers have recently taken center stage in language modeling after LSTMs were long considered the dominant model architecture. In this project, we investigate the performance of two Transformer architectures, BERT and Transformer-XL, for the language modeling task. We use a sub-word model setting with the Finnish language and compare it to the previous state-of-the-art (SOTA) LSTM model. BERT achieves a pseudo-perplexity score of 14.5, which is the first such measure achieved as far as we know. Transformer-XL improves the perplexity score to 73.58, which is 27\% better than the LSTM model. | {
"paragraphs": [
[
"Language modeling is a probabilistic description of language phenomenon. It provides essential context to distinguish words which sound similar and therefore has one of the most useful applications in Natural Language Processing (NLP) especially in downstreaming tasks like Automatic Speech Recognition (ASR). Recurrent Neural Networks (RNN) especially Long Short Term Memory (LSTM) networks BIBREF0 have been the typical solution to language modeling which do achieve strong results. In spite of these results, their fundamental sequential computation constraint has restricted their use in the modeling of long-term dependencies in sequential data. To address these issues Transformer architecture was introduced. Transformers relies completely on an attention mechanism to form global dependencies between input and output. It also offers more parallelization and has achieved SOTA results in language modeling outperforming LSTM models BIBREF1.",
"In recent years,we have seen a lot of development based on this standard transformer models particularly on unsupervised pre-training(BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 which have set state-of-the art results on multiple NLP benchmarks. One such model architecture has been the Bidirectional Encoder Representations from Transformers (BERT) model which uses a deep bidirectional transformer architecture.",
"Another architecture of interest would be the Transformer-XL, which introduces the notion of recurrence in a self-attention model.",
"The primary research focus though has been mostly on English language for which abundant data is present. It is interesting to see the performance of these models for an agglutinative language like Finnish, which is morphologically richer than English.",
"In this project, we explore the implementation of Transformer-based models (BERT and Transformer-XL) in language modeling for Finnish. We will use the same training data as in BIBREF8 so that we can do fair comparisons with the performance of the LSTM models. Also, as the BERT model is a bi-directional transformer, we will have to approximate the conditional probabilities given a sequence of words. We also experiment with using sub-word units with Transformer-XL to cope with the large vocabulary problems associated with the Finnish Language. With smaller units, the modeled sequences are longer, and we hope that the recursive XL architecture can allow us to still model long term effects. To the best of our knowledge this is the first work with the Finnish language to use the following:",
"Approximation of perplexity using a BERT architecture",
"Using Transformer-XL architecture with sub-word units.",
"Comparison of Transformer and LSTM models as language models in the same comparable settings with an agglutinative language."
],
[
"The goal of an language model is to assign meaningful probabilities to a sequence of words. Given a set of tokens $\\mathbf {X}=(x_1,....,x_T)$, where $T$ is the length of a sequence, our task is to estimate the joint conditional probability $P(\\mathbf {X})$ which is",
"were $(x_{1}, \\ldots , x_{i-1})$ is the context. An Intrinsic evaluation of the performance of Language Models is perplexity (PPL) which is defined as the inverse probability of the set of the tokens and taking the $T^{th}$ root were $T$ is the number of tokens",
"In our two approaches we use transformer based architectures: BERT and Transformer-XL as mentioned before. Calculating the auto-regressive $P(\\mathbf {X})$ for the transformer-XL is quite straight-forward as the model is unidirectional but it doesn't factorize the same way for a bi-directional model like BERT.",
"BERT's bi-directional context poses a problem for us to calculate an auto-regressive joint probability. A simple fix could be that we mask all the tokens $\\mathbf {x}_{>i}$ and we calculate the conditional factors as we do for an unidirectional model. By doing so though, we loose upon the advantage of bi-directional context the BERT model enables. We propose an approximation of the joint probability as,",
"This type of approximations has been previously explored with Bi-directional RNN LM's BIBREF9 but not for deep transformer models. We therefore, define a pseudo-perplexity score from the above approximated joint probability.",
"The original BERT has two training objectives: 'Masked language modelling', in which you mask input tokens randomly and then predict the masked tokens using the left and right context. Additionally, there is the 'next sentence prediction' task that jointly trains text-pair representations. For training the Masked language model the original BERT used Byte Pair Encoding (BPE) BIBREF10 for subword tokenization BIBREF11.For example the rare word \"unaffable\" to be split up into more frequent subwords such as [\"un\", \"##aff\", \"##able\"]. To remain consistent with experiments performed with LSTM's we use the morfessor for the subword tokenization in the Finnish Language. In Addition, we also apply boundary markers as in (Table TABREF7) and train two separate models using this distinction. We train with left-marked markings as the original BERT was trained with such a scheme and the left+right-marked as it was the previous SOTA with the Finnish Language. For the transformer-XL experiments, we just train with the left+right marked scheme.",
"The Next Sentence Prediction (NSP) is a binary classification task which predicts whether two segments follow each other in the original text. This pre-training task was proposed to further improve the performance on downstreaming tasks, like Natural Language Inference(NLI) but in reality removing the NSP loss matches or slightly improves the downstream task performance BIBREF12. In this paper, we have omitted the NSP task from the BERT pre-training procedure and changed the input from a SEGMENT-PAIR input to a SINGLE SEGMENT input. As seen in (Fig FIGREF8)",
"Transformer-XL introduced the notion of recurrence in self-attention by caching the hidden state sequence to compute the hidden states of a new segment. It also introduces a novel relative positional embedding scheme and both of them combined address the issue of fixed context lengths. Transformer-XL as mentioned is a unidirectional deep transformer architecture, therefore the perplexity can be calculated as (Eq DISPLAY_FORM5). The only change is in the input format, were we use sub-word units rather than whole word units as Finnish is morphologically richer than English."
],
[
"The Finnish text data used for the language modeling task is provided by BIBREF13. The dataset consists mainly of newspapers and books of around 144 million word tokens and 4.2 million unique tokens. We use a Morfessor 2.0 BIBREF14 using the basic unsupervised Morfessor Baseline algorithm BIBREF15 with a corpus weight parameter ($\\alpha $) of 0.001. We have a vocabulary of 34K subword tokens for the left+right-marked (+m+) markings and 19K subword tokens for the left-marked (+m) markings. We also pre-process the data to remove any punctuation marks such that we can use the same data with an ASR system. The input is one sentence per line and we shuffle the sentences at each epoch. The data is randomly divided into- training dataset and a validation dataset. The test dataset consists of 2850 Finnish news articles obtained from the Finnish national broadcaster YLE."
],
[
"All BERT experiments were trained for 500K steps. The code was written in Python and we used the Tensorflow libraries to create the models. The experiments were trained on a single NVIDIA Tesla V100 32 GB graphic card. The data was first processed into Tensorflow records as the input to the model. The set of hyperparameters which we found optimal after experimenting with different sets are in (Table TABREF10).",
"This set of parameters were chosen as there training performances were better than smaller models on modelling the long sequences of sub-words. We use the Adam optimizer BIBREF16 same as the English BERT. A maximum sequence length of 300 encompasses 98 percent of the training data and also allows us to fit larger models on the GPU card. Hyper-parameter optimization is very difficult in case of these models as they take around 15 days to train given the resources. The hyperparameter choices were therefore more dependant on the original BERT with little tweaks. We assess the training performance of the the model in the (Table TABREF11).",
"When we train the BERT model we mask some percentage of the input tokens at random, and then predict those masked tokens, this is known as Masked LM. The masked LM loss, refers specifically to the loss when the masked language model predicts on the masked tokens. The masked LM accuracy refers specifically to the accuracy with which the model predicts on the masked tokens. The loss for both the models are far off from the Masked LM loss of the English BERT, key difference being the pre-training data for both the language models are quite different. Google training their model on 3.3 Billion words from BooksCorpus BIBREF17 and the English Wikipedia and our model being trained on 144 million words. Comparing the two Finnish models, the left-marked model has a better training performance than left+right-marked model.",
"The results of the pseudo-perplexity described in the previous section to evaluate the above models on the test data-set is in table (Table TABREF12).The test dataset is of a different context when compared to the training data, and interestingly both the models are quite confident when it comes to the test dataset. The pseudo-perplexity values of left-marked are lower when compared to left-right-marked signifying that it is more confident.",
"We cannot directly compare the perplexity scores BERT model with a unidirectional LSTM model as both are calculated in a different manner. We can experiment to compare it with a Bi-directional LSTM or use a downstreaming task to compare both the performances. We could also randomly mask tokens and then compare the prediction accuracy on the masked tokens."
],
[
"All Transformer-XL experiments are also trained equally for 500K steps. The code was written in Python and we used the PyTorch libraries for model creation. The experiments were trained on a single NVIDIA Tesla V100 32 GB graphic card. Two sets of hyperparameters were chosen to be compared after some initial optimization and are in (Table TABREF14)",
"From the above parameter choice, we wanted to experiment, whether providing more Segment and Memory length is advantageous (longer context) than a larger model. These parameters where chosen after some hyperparameter optimization. Same as for BERT we use the Adam optimizer, but we also use a cosine annealing learning rate scheduler to speed-up training BIBREF18. The training performance results are in (Table TABREF15)",
"As opposed to BERT, the left+right-marked models have a better training performance than their counterpart. Interestingly the larger model trains much better when compared to providing larger contexts. The same set of parameters for the 32-32 model cannot be replicated for 150-150 model as the latter takes a lot of space on the GPU card. The test set is same as that used with BERT and the results are in (Table TABREF16). The test performance is similar to that of the training performance with left-right-marked large model(32-32) performing the best. We can directly compare the perplexity scores with the previous best BIBREF19 as both are unidirectional models, Transformer-XL model has outperformed the latter by 27%."
],
[
"Transformer-XL and BERT both have low perplexity and pseudo-perplexity scores, but both cannot be directly compared as they are calculated quite differently (Eq.DISPLAY_FORM4, Eq.DISPLAY_FORM6). The dramatically low scores of BERT indicate that per word predicted probability is higher than that of a uni-directional model. Thus the predicted word probability distribution is much sharper when compared to the XL model probability distribution. At this point, we cannot say which model architecture has performed better- BERT or Transformer-XL, despite both of them achieving good low perplexity scores. We would need to experiment with a downstreaming task in-order to fairly compare model performances."
],
[
"Recent migration to transformer based architectures in language modeling from LSTM models is justified as Transformer-XL obtains strong perplexity results. BERT model also obtains very low pseudo-perplexity scores but it is inequitable to the unidirectional models. Our major contributions in this project, is the use of Transformer-XL architectures for the Finnish language in a sub-word setting, and the formulation of pseudo perplexity for the BERT model. Further comparisons between the transformer architectures can be made by downstreaming it to an ASR task, which will be explored in the future."
]
],
"section_name": [
"Introduction",
"Background & Methods",
"Data",
"Experiments & Results ::: BERT",
"Experiments & Results ::: Transformer-XL",
"Experiments & Results ::: Result comparisons for Transformer architectures",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"1fd0018ab5fe1179013f70c2f19b092f27e6ead3",
"dd480f69bb2f074ba4c6a715f46ee4e3101b9c15"
],
"answer": [
{
"evidence": [
"The original BERT has two training objectives: 'Masked language modelling', in which you mask input tokens randomly and then predict the masked tokens using the left and right context. Additionally, there is the 'next sentence prediction' task that jointly trains text-pair representations. For training the Masked language model the original BERT used Byte Pair Encoding (BPE) BIBREF10 for subword tokenization BIBREF11.For example the rare word \"unaffable\" to be split up into more frequent subwords such as [\"un\", \"##aff\", \"##able\"]. To remain consistent with experiments performed with LSTM's we use the morfessor for the subword tokenization in the Finnish Language. In Addition, we also apply boundary markers as in (Table TABREF7) and train two separate models using this distinction. We train with left-marked markings as the original BERT was trained with such a scheme and the left+right-marked as it was the previous SOTA with the Finnish Language. For the transformer-XL experiments, we just train with the left+right marked scheme."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" To remain consistent with experiments performed with LSTM's we use the morfessor for the subword tokenization in the Finnish Language."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The original BERT has two training objectives: 'Masked language modelling', in which you mask input tokens randomly and then predict the masked tokens using the left and right context. Additionally, there is the 'next sentence prediction' task that jointly trains text-pair representations. For training the Masked language model the original BERT used Byte Pair Encoding (BPE) BIBREF10 for subword tokenization BIBREF11.For example the rare word \"unaffable\" to be split up into more frequent subwords such as [\"un\", \"##aff\", \"##able\"]. To remain consistent with experiments performed with LSTM's we use the morfessor for the subword tokenization in the Finnish Language. In Addition, we also apply boundary markers as in (Table TABREF7) and train two separate models using this distinction. We train with left-marked markings as the original BERT was trained with such a scheme and the left+right-marked as it was the previous SOTA with the Finnish Language. For the transformer-XL experiments, we just train with the left+right marked scheme."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To remain consistent with experiments performed with LSTM's we use the morfessor for the subword tokenization in the Finnish Language. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"5f8e6dfa65ee1ba3524e7db14c1565c074b843be"
],
"answer": [
{
"evidence": [
"The goal of an language model is to assign meaningful probabilities to a sequence of words. Given a set of tokens $\\mathbf {X}=(x_1,....,x_T)$, where $T$ is the length of a sequence, our task is to estimate the joint conditional probability $P(\\mathbf {X})$ which is",
"were $(x_{1}, \\ldots , x_{i-1})$ is the context. An Intrinsic evaluation of the performance of Language Models is perplexity (PPL) which is defined as the inverse probability of the set of the tokens and taking the $T^{th}$ root were $T$ is the number of tokens",
"BERT's bi-directional context poses a problem for us to calculate an auto-regressive joint probability. A simple fix could be that we mask all the tokens $\\mathbf {x}_{>i}$ and we calculate the conditional factors as we do for an unidirectional model. By doing so though, we loose upon the advantage of bi-directional context the BERT model enables. We propose an approximation of the joint probability as,",
"This type of approximations has been previously explored with Bi-directional RNN LM's BIBREF9 but not for deep transformer models. We therefore, define a pseudo-perplexity score from the above approximated joint probability."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (formulas in selection): Pseudo-perplexity is perplexity where conditional joint probability is approximated.",
"highlighted_evidence": [
"The goal of an language model is to assign meaningful probabilities to a sequence of words. Given a set of tokens $\\mathbf {X}=(x_1,....,x_T)$, where $T$ is the length of a sequence, our task is to estimate the joint conditional probability $P(\\mathbf {X})$ which is\n\nwere $(x_{1}, \\ldots , x_{i-1})$ is the context. An Intrinsic evaluation of the performance of Language Models is perplexity (PPL) which is defined as the inverse probability of the set of the tokens and taking the $T^{th}$ root were $T$ is the number of tokens",
"We propose an approximation of the joint probability as,\n\nThis type of approximations has been previously explored with Bi-directional RNN LM's BIBREF9 but not for deep transformer models. We therefore, define a pseudo-perplexity score from the above approximated joint probability."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"Is the LSTM baseline a sub-word model?",
"How is pseudo-perplexity defined?"
],
"question_id": [
"fc4ae12576ea3a85ea6d150b46938890d63a7d18",
"19cf7884c0c509c189b1e74fe92c149ff59e444b"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f",
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"TABLE II BERT HYPERPARAMETERS",
"TABLE I TWO METHODS OF MARKING SUBWORD UNITS SUCH THAT THE ORIGINAL SENTENCE ’TWO SLIPPERS’ IS RECONSTRUCTED",
"Fig. 1. BERT-Original sentence ’how are you doing today’",
"TABLE V TR-XL HYPERPARAMETERS",
"TABLE III BERT TRAINING PERFORMANCE",
"TABLE VI TR-XL TRAINING PERPLEXITY SCORES",
"TABLE VII TR-XL TEST PERPLEXITY SCORES, (-): THE EXPERIMENT MODELS ARE NOT AVAILABLE"
],
"file": [
"2-TableII-1.png",
"2-TableI-1.png",
"3-Figure1-1.png",
"3-TableV-1.png",
"3-TableIII-1.png",
"4-TableVI-1.png",
"4-TableVII-1.png"
]
} | [
"How is pseudo-perplexity defined?"
] | [
[
"2003.11562-Background & Methods-1",
"2003.11562-Background & Methods-3",
"2003.11562-Background & Methods-4",
"2003.11562-Background & Methods-0"
]
] | [
"Answer with content missing: (formulas in selection): Pseudo-perplexity is perplexity where conditional joint probability is approximated."
] | 239 |
1608.08188 | Visual Question: Predicting If a Crowd Will Agree on the Answer | Visual question answering (VQA) systems are emerging from a desire to empower users to ask any natural language question about visual content and receive a valid answer in response. However, close examination of the VQA problem reveals an unavoidable, entangled problem that multiple humans may or may not always agree on a single answer to a visual question. We train a model to automatically predict from a visual question whether a crowd would agree on a single answer. We then propose how to exploit this system in a novel application to efficiently allocate human effort to collect answers to visual questions. Specifically, we propose a crowdsourcing system that automatically solicits fewer human responses when answer agreement is expected and more human responses when answer disagreement is expected. Our system improves upon existing crowdsourcing systems, typically eliminating at least 20% of human effort with no loss to the information collected from the crowd. | {
"paragraphs": [
[
"What would be possible if a person had an oracle that could immediately provide the answer to any question about the visual world? Sight-impaired users could quickly and reliably figure out the denomination of their currency and so whether they spent the appropriate amount for a product BIBREF0 . Hikers could immediately learn about their bug bites and whether to seek out emergency medical care. Pilots could learn how many birds are in their path to decide whether to change course and so avoid costly, life-threatening collisions. These examples illustrate several of the interests from a visual question answering (VQA) system, including tackling problems that involve classification, detection, and counting. More generally, the goal for VQA is to have a single system that can accurately answer any natural language question about an image or video BIBREF1 , BIBREF2 , BIBREF3 .",
"Entangled in the dream of a VQA system is an unavoidable issue that, when asking multiple people a visual question, sometimes they all agree on a single answer while other times they offer different answers (Figure FIGREF1 ). In fact, as we show in the paper, these two outcomes arise in approximately equal proportions in today's largest publicly-shared VQA benchmark that contains over 450,000 visual questions. Figure FIGREF1 illustrates that human disagreements arise for a variety of reasons including different descriptions of the same concept (e.g., “minor\" and “underage\"), different concepts (e.g., “ghost\" and “photoshop\"), and irrelevant responses (e.g., “no\").",
"Our goal is to account for whether different people would agree on a single answer to a visual question to improve upon today's VQA systems. We propose multiple prediction systems to automatically decide whether a visual question will lead to human agreement and demonstrate the value of these predictions for a new task of capturing the diversity of all plausible answers with less human effort.",
"Our work is partially inspired by the goal to improve how to employ crowds as the computing power at run-time. Towards satisfying existing users, gaining new users, and supporting a wide range of applications, a crowd-powered VQA system should be low cost, have fast response times, and yield high quality answers. Today's status quo is to assume a fixed number of human responses per visual question and so a fixed cost, delay, and potential diversity of answers for every visual question BIBREF2 , BIBREF0 , BIBREF4 . We instead propose to dynamically solicit the number of human responses based on each visual question. In particular, we aim to accrue additional costs and delays from collecting extra answers only when extra responses are needed to discover all plausible answers. We show in our experiments that our system saves 19 40-hour work weeks and $1800 to answer 121,512 visual questions, compared to today's status quo approach BIBREF0 .",
"Our work is also inspired by the goal to improve how to employ crowds to produce the information needed to train and evaluate automated methods. Specifically, researchers in fields as diverse as computer vision BIBREF2 , computational linguistics BIBREF1 , and machine learning BIBREF3 rely on large datasets to improve their VQA algorithms. These datasets include visual questions and human-supplied answers. Such data is critical for teaching machine learning algorithms how to answer questions by example. Such data is also critical for evaluating how well VQA algorithms perform. In general, “bigger\" data is better. Current methods to create these datasets assume a fixed number of human answers per visual question BIBREF2 , BIBREF4 , thereby either compromising on quality by not collecting all plausible answers or cost by collecting additional answers when they are redundant. We offer an economical way to spend a human budget to collect answers from crowd workers. In particular, we aim to actively allocate additional answers only to visual questions likely to have multiple answers.",
"The key contributions of our work are as follows:"
],
[
"The remainder of the paper is organized into four sections. We first describe a study where we investigate: 1) How much answer diversity arises for visual questions? and 2) Why do people disagree (Section SECREF4 )? Next, we explore the following two questions: 1) Given a novel visual question, can a machine correctly predict whether multiple independent members of a crowd would supply the same answer? and 2) If so, what insights does our machine-learned system reveal regarding what humans are most likely to agree about (Section SECREF5 )? In the following section, we propose a novel resource allocation system for efficiently capturing the diversity of all answers for a set of visual questions (Section SECREF6 ). Finally, we end with concluding remarks (Section SECREF7 )."
],
[
"Our first aim is to answer the following questions: 1) How much answer diversity arises for visual questions? and 2) Why do people disagree?"
],
[
"We now explore the following two questions: 1) Given a novel visual question, can a machine correctly predict whether multiple independent members of a crowd would supply the same answer? and 2) If so, what insights does our machine-learned system reveal regarding what humans are most likely to agree about?"
],
[
"We pose the prediction task as a binary classification problem. Specifically, given an image and associated question, a system outputs a binary label indicating whether a crowd will agree on the same answer. Our goal is to design a system that can detect which visual questions to assign a disagreement label, regardless of the disagreement cause (e.g., subjectivity, ambiguity, difficulty). We implement both random forest and deep learning classifiers.",
"A visual question is assigned either an answer agreement or disagreement label. To assign labels, we employ 10 crowdsourced answers for each visual question. A visual question is assigned an answer agreement label when there is an exact string match for 9 of the 10 crowdsourced answers (after answer pre-preprocessing, as discussed in the previous section) and an answer disagreement label otherwise. Our rationale is to permit the possibility of up to one “careless/spam\" answer per visual question. The outcome of our labeling scheme is that a disagreement label is agnostic to the specific cause of disagreement and rather represents the many causes (described above).",
"For our first system, we use domain knowledge to guide the learning process. We compile a set of features that we hypothesize inform whether a crowd will arrive at an undisputed, single answer. Then we apply a machine learning tool to reveal the significance of each feature. We propose features based on the observation that answer agreement often arises when 1) a lay person's attention can be easily concentrated to a single, undisputed region in an image and 2) a lay person would find the requested task easy to address.",
"We employ five image-based features coming from the salient object subitizing BIBREF22 (SOS) method, which produces five probabilities that indicate whether an image contains 0, 1, 2, 3, or 4+ salient objects. Intuitively, the number of salient objects shows how many regions in an image are competing for an observer's attention, and so may correlate with the ease in identifying a region of interest. Moreover, we hypothesize this feature will capture our observation from the previous study that counting problems typically leads to disagreement for images showing many objects, and agreement otherwise.",
"We employ a 2,492-dimensional feature vector to represent the question-based features. One feature is the number of words in the question. Intuitively, a longer question offers more information and we hypothesize additional information makes a question more precise. The remaining features come from two one-hot vectors describing each of the first two words in the question. Each one-hot vector is created using the learned vocabularies that define all possible words at the first and second word location of a question respectively (using training data, as described in the next section). Intuitively, early words in a question inform the type of answers that might be possible and, in turn, possible reasons/frequency for answer disagreement. For example, we expect “why is\" to regularly elicit many opinions and so disagreement. This intuition about the beginning words of a question is also supported by our analysis in the previous section which shows that different answer types yield different biases of eliciting answer agreement versus disagreement.",
"We leverage a random forest classification model BIBREF23 to predict an answer (dis)agreement label for a given visual question. This model consists of an ensemble of decision tree classifiers. We train the system to learn the unique weighted combinations of the aforementioned 2,497 features that each decision tree applies to make a prediction. At test time, given a novel visual question, the trained system converts a 2,497 feature descriptor of the visual question into a final prediction that reflects the majority vote prediction from the ensemble of decision trees. The system returns the final prediction along with a probability indicating the system's confidence in that prediction. We employ the Matlab implementation of random forests, using 25 trees and the default parameters.",
"We next adapt a VQA deep learning architecture BIBREF24 to learn the predictive combination of visual and textual features. The question is encoded with a 1024-dimensional LSTM model that takes in a one-hot descriptor of each word in the question. The image is described with the 4096-dimensional output from the last fully connected layer of the Convolutional Neural Network (CNN), VGG16 BIBREF25 . The system performs an element-wise multiplication of the image and question features, after linearly transforming the image descriptor to 1024 dimensions. The final layer of the architecture is a softmax layer.",
"We train the system to predict (dis)agreement labels with training examples, where each example includes an image and question. At test time, given a novel visual question, the system outputs an unnormalized log probability indicating its belief in both the agreement and disagreement label. For our system's prediction, we convert the belief in the disagreement label into a normalized probability. Consequently, predicted values range from 0 to 1 with lower values reflecting greater likelihood for crowd agreement."
],
[
"We now describe our studies to assess the predictive power of our classification systems to decide whether visual questions will lead to answer (dis)agreement.",
"We capitalize on today's largest visual question answering dataset BIBREF2 to evaluate our prediction system, which includes 369,861 visual questions about real images. Of these, 248,349 visual questions (i.e., Training questions 2015 v1.0) are kept for for training and the remaining 121,512 visual questions (i.e., Validation questions 2015 v1.0) are employed for testing our classification system. This separation of training and testing samples enables us to estimate how well a classifier will generalize when applied to an unseen, independent set of visual questions.",
"To our knowledge, no prior work has directly addressed predicting answer (dis)agreement for visual questions. Therefore, we employ as a baseline a related VQA algorithm BIBREF24 , BIBREF2 which produces for a given visual question an answer with a confidence score. This system parallels the deep learning architecture we adapt. However, it predicts the system's uncertainty in its own answer, whereas we are interested in the humans' collective disagreement on the answer. Still, it is a useful baseline to see if an existing algorithm could serve our purpose.",
"We evaluate the predictive power of the classification systems based on each classifier's predictions for the 121,512 visual questions in the test dataset. We first show performance of the baseline and our two prediction systems using precision-recall curves. The goals are to achieve a high precision, to minimize wasting crowd effort when their efforts will be redundant, and a high recall, to avoid missing out on collecting the diversity of accepted answers from a crowd. We also report the average precision (AP), which indicates the area under a precision-recall curve. AP values range from 0 to 1 with better-performing prediction systems having larger values.",
"Figure FIGREF8 a shows precision-recall curves for all prediction systems. Both our proposed classification systems outperform the VQA Algorithm BIBREF2 baseline; e.g., Ours - RF yields a 12 percentage point improvement with respect to AP. This is interesting because it shows there is value in learning the disagreement task specifically, rather than employing an algorithm's confidence in its answers. More generally, our results demonstrate it is possible to predict whether a crowd will agree on a single answer from a given image and associated question. Despite the significant variety of questions and image content, and despite the variety of reasons for which the crowd can disagree, our learned model is able to produce quite accurate results.",
"We observe our Random Forest classifier outperforms our deep learning classifier; e.g., Ours: RF yields a three percentage point improvement with respect to AP while consistently yielding improved precision-recall values over Ours: LSTM-CNN (Figure FIGREF8 a). In general, deep learning systems hold promise to replace handcrafted features to pick out the discriminative features. Our baselines highlight a possible value in developing a different deep learning architecture for the problem of learning answer disagreement than applied for predicting answers to visual questions.",
"We show examples of prediction results where our top-performing RF classifier makes its most confident predictions (Figure FIGREF8 b). In these examples, the predictor expects human agreement for “what room... ?\" visual questions and disagreement for “why... ?\" visual questions. These examples highlight that the classifier may have a strong language prior towards making predictions, as we will discuss in the next section.",
"We now explore what makes a visual question lead to crowd answer agreement versus disagreement. We examine the influence of whether visual questions lead to the three types of answers (“yes/no\", “number\", “other\") for both our random forest (RF) and deep learning (DL) classification systems. We enrich our analysis by examining the predictive performance of both classifiers when they are trained and tested exclusively with image and question features respectively. Figure FIGREF9 shows precision-recall curves for both classification systems with question features alone (Q), image features alone (I), and both question and image features together (Q+I).",
"When comparing AP scores (Figure FIGREF9 ), we observe our Q+I predictors yield the greatest predictive performance for visual questions that lead to “other\" answers, followed by “number\" answers, and finally “yes/no\" answers. One possible reason for this finding is that the question wording strongly drives whether a crowd will disagree for “other\" visual questions, whereas some notion of common sense may be required to learn whether a crowd will agree for “yes/no\" visual questions (e.g., Figure FIGREF7 a vs Figure FIGREF7 g).",
"We observe that question-based features yield greater predictive performance than image-based features for all visual questions, when comparing AP scores for Q and I classification results (Figure FIGREF9 ). In fact, image features contribute to performance improvements only for our random forest classifier for visual questions that lead to “number\" answers, as illustrated by comparing AP scores for Our RF: Q+I and Our RF: Q (Figure FIGREF9 b). Our overall finding that most of the predictive power stems from language-based features parallels feature analysis findings in the automated VQA literature BIBREF2 , BIBREF21 . This does not mean, however, that the image content is not predictive. Further work improving visual content cues for VQA agreement is warranted.",
"Our findings suggest that our Random Forest classifier's overall advantage over our deep learning system arises because of counting questions, as indicated by higher AP scores (Figure FIGREF9 ). For example, the advantage of the initial higher precision (Figure FIGREF8 a; Ours: RF vs Ours: DL) is also observed for counting questions (Figure FIGREF9 b; Ours: RF - Q+I vs Ours: DL - Q+I). We hypothesize this advantage arises due to the strength of the Random Forest classifier in pairing the question prior (“How many?\") with the image-based SOS features that indicates the number of objects in an image. Specifically, we expect “how many\" to lead to agreement only for small counting problems."
],
[
"We next present a novel resource allocation system for efficiently capturing the diversity of true answers for a batch of visual questions. Today's status quo is to either uniformly collect INLINEFORM0 answers for every visual question BIBREF2 or collect multiple answers where the number is determined by external crowdsourcing conditions BIBREF0 . Our system instead spends a human budget by predicting the number of answers to collect for each visual question based on whether multiple human answers are predicted to be redundant."
],
[
"Suppose we have a budget INLINEFORM0 which we can allocate to collect extra answers for a subset of visual questions. Our system automatically decides to which visual questions to allocate the “extra\" answers in order to maximize captured answer diversity for all visual questions.",
"The aim of our system is to accrue additional costs and delays from collecting extra answers only when extra responses will provide more information. Towards this aim, our system involves three steps to collect answers for all INLINEFORM0 visual questions (Figure FIGREF11 a). First, the system applies our top-performing random forest classifier to every visual question in the batch. Then, the system ranks the INLINEFORM1 visual questions based on predicted scores from the classifier, from visual questions most confidently predicted to lead to answer “agreement\" from a crowd to those most confidently predicted to lead to answer “disagreement\" from a crowd. Finally, the system solicits more ( INLINEFORM2 ) human answers for the INLINEFORM3 visual questions predicted to reflect the greatest likelihood for crowd disagreement and fewer ( INLINEFORM4 ) human answers for the remaining visual questions. More details below."
],
[
"We now describe our studies to assess the benefit of our allocation system to reduce human effort to capture the diversity of all answers to visual questions.",
"We evaluate the impact of actively allocating extra human effort to answer visual questions as a function of the available budget of human effort. Specifically, for a range of budget levels, we compute the total measured answer diversity (as defined below) resulting for the batch of visual questions. The goal is to capture a large amount of answer diversity with little human effort.",
"We conduct our studies on the 121,512 test visual questions about real images (i.e., Validation questions 2015 v1.0). For each visual question, we establish the set of true answers as all unique answers which are observed at least twice in the 10 crowdsourced answers per visual question. We require agreement by two workers to avoid the possibility that “careless/spam\" answers are treated as ground truth.",
"We collect either the minimum of INLINEFORM0 answer per visual question or the maximum of INLINEFORM1 answers per visual question. Our number of answers roughly aligns with existing crowd-powered VQA systems, for example with VizWiz, “On average, participants received 3.3 (SD=1.8) answers for each question\" BIBREF0 . Our maximum number of answers also supports the possibility of capturing the maximum of three unique, valid answers typically observed in practice (recall study above). While more elaborate schemes for distributing responses may be possible, we will show this approach already proves quite effective in our experiments. We simulate answer collection by randomly selecting answers from the 10 crowd answers per visual question.",
"We compare our approach to the following baselines:",
" ",
"As in the previous section, we leverage the output confidence score from the publicly-shared model BIBREF24 learned from a LSTM-CNN deep learning architecture to rank the order of priority for visual questions to receive redundancy.",
" ",
"The system randomly prioritizes which images receive redundancy. This predictor illustrates the best a user can achieve today with crowd-powered systems BIBREF0 , BIBREF5 or with current dataset collection methods BIBREF2 , BIBREF4 .",
"We quantify the total diversity of answers captured by a resource allocation system for a batch of visual questions INLINEFORM0 as follows: DISPLAYFORM0 ",
"where INLINEFORM0 represents the set of all true answers for the INLINEFORM1 -th visual question, INLINEFORM2 represents the set of unique answers captured in the INLINEFORM3 answers collected for the INLINEFORM4 -th visual question, and INLINEFORM5 represents the set of unique answers captured in the INLINEFORM6 answers collected for the INLINEFORM7 -th visual question. Given no extra human budget, total diversity comes from the second term which indicates the diversity captured when only INLINEFORM8 answers are collected for every visual question. Given a maximum available extra human budget ( INLINEFORM9 ), total diversity comes from the first term which indicates the diversity captured when INLINEFORM10 answers are collected for every visual question. Given a partial extra human budget ( INLINEFORM11 ), the aim is to have perfect predictions such that the minimum number of answers ( INLINEFORM12 ) are allocated only for visual questions with one true answer so that all diverse answers are safely captured.",
"We measure diversity per visual question as the number of all true answers collected per visual question ( INLINEFORM0 ). Larger values reflect greater captured diversity. The motivation for this measure is to only give total credit to visual questions when all valid, unique human answers are collected.",
"Our system consistently offers significant gains over today's status quo approach (Figure FIGREF11 b). For example, our system accelerates the collection of 70% of the diversity by 21% over the Status Quo baseline. In addition, our system accelerates the collection of the 82% of diversity one would observe with VizWiz by 23% (i.e., average of 3.3 answers per visual question). In absolute terms, this means eliminating the collection of 92,180 answers with no loss to captured answer diversity. This translates to eliminating 19 40-hour work weeks and saving over $1800, assuming workers are paid $0.02 per answer and take 30 seconds to answer a visual question. Our approach fills an important gap in the crowdsourcing answer collection literature for targeting the allocation of extra answers only to visual questions where a diversity of answers is expected.",
"Figure FIGREF11 b also illustrates the advantage of our system over a related VQA algorithm BIBREF2 for our novel application of cost-sensitive answer collection from a crowd. As observed, relying on an algorithm's confidence in its answer offers a valuable indicator over today's status quo of passively budgeting. While we acknowledge this method is not intended for our task specifically, it still serves as an important baseline (as discussed above). We attribute the further performance gains of our prediction system to it directly predicting whether humans will disagree rather than predicting a property of a specific algorithm (e.g., confidence of the Antol et al. algorithm in its answer prediction)."
],
[
"We proposed a new problem of predicting whether different people would answer with the same response to the same visual question. Towards motivating the practical implications for this problem, we analyzed nearly half a million visual questions and demonstrated there is nearly a 50/50 split between visual questions that lead to answer agreement versus disagreement. We observed that crowd disagreement arose for various types of answers (yes/no, counting, other) for many different reasons. We next proposed a system that automatically predicts whether a visual question will lead to a single versus multiple answers from a crowd. Our method outperforms a strong existing VQA system limited to estimating system uncertainty rather than crowd disagreement. Finally, we demonstrated how to employ the prediction system to accelerate the collection of diverse answers from a crowd by typically at least 20% over today's status quo of fixed redundancy allocation."
],
[
"The authors gratefully acknowledge funding from the Office of Naval Research (ONR YIP N00014-12-1-0754) and National Science Foundation (IIS-1065390). We thank Dinesh Jayaraman, Yu-Chuan Su, Suyog Jain, and Chao-Yeh Chen for their assistance with experiments."
]
],
"section_name": [
"Introduction",
"Paper Overview",
"VQA - Analysis of Answer (Dis)Agreements",
"Visual Question - Predicting If a Crowd Will (Dis)Agree on an Answer",
"Prediction Systems",
"Analysis of Prediction System",
"Capturing Answer Diversity with Less Effort",
"Answer Collection System",
"Analysis of Answer Collection System",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"6d0c5644c2dfcde95e0ef7e4ec19d9610e7ef759",
"73313eac562e3d24d9b706ac9572d7008d36ffb1"
],
"answer": [
{
"evidence": [
"We next adapt a VQA deep learning architecture BIBREF24 to learn the predictive combination of visual and textual features. The question is encoded with a 1024-dimensional LSTM model that takes in a one-hot descriptor of each word in the question. The image is described with the 4096-dimensional output from the last fully connected layer of the Convolutional Neural Network (CNN), VGG16 BIBREF25 . The system performs an element-wise multiplication of the image and question features, after linearly transforming the image descriptor to 1024 dimensions. The final layer of the architecture is a softmax layer."
],
"extractive_spans": [],
"free_form_answer": "LSTM to encode the question, VGG16 to extract visual features. The outputs of LSTM and VGG16 are multiplied element-wise and sent to a softmax layer.",
"highlighted_evidence": [
"The question is encoded with a 1024-dimensional LSTM model that takes in a one-hot descriptor of each word in the question. The image is described with the 4096-dimensional output from the last fully connected layer of the Convolutional Neural Network (CNN), VGG16 BIBREF25 . The system performs an element-wise multiplication of the image and question features, after linearly transforming the image descriptor to 1024 dimensions. The final layer of the architecture is a softmax layer."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We leverage a random forest classification model BIBREF23 to predict an answer (dis)agreement label for a given visual question. This model consists of an ensemble of decision tree classifiers. We train the system to learn the unique weighted combinations of the aforementioned 2,497 features that each decision tree applies to make a prediction. At test time, given a novel visual question, the trained system converts a 2,497 feature descriptor of the visual question into a final prediction that reflects the majority vote prediction from the ensemble of decision trees. The system returns the final prediction along with a probability indicating the system's confidence in that prediction. We employ the Matlab implementation of random forests, using 25 trees and the default parameters.",
"We next adapt a VQA deep learning architecture BIBREF24 to learn the predictive combination of visual and textual features. The question is encoded with a 1024-dimensional LSTM model that takes in a one-hot descriptor of each word in the question. The image is described with the 4096-dimensional output from the last fully connected layer of the Convolutional Neural Network (CNN), VGG16 BIBREF25 . The system performs an element-wise multiplication of the image and question features, after linearly transforming the image descriptor to 1024 dimensions. The final layer of the architecture is a softmax layer."
],
"extractive_spans": [
"random forest",
"The question is encoded with a 1024-dimensional LSTM model that takes in a one-hot descriptor of each word in the question. The image is described with the 4096-dimensional output from the last fully connected layer of the Convolutional Neural Network (CNN), VGG16 BIBREF25 . The system performs an element-wise multiplication of the image and question features, after linearly transforming the image descriptor to 1024 dimensions. The final layer of the architecture is a softmax layer."
],
"free_form_answer": "",
"highlighted_evidence": [
"We leverage a random forest classification model BIBREF23 to predict an answer (dis)agreement label for a given visual question.",
"We next adapt a VQA deep learning architecture BIBREF24 to learn the predictive combination of visual and textual features. The question is encoded with a 1024-dimensional LSTM model that takes in a one-hot descriptor of each word in the question. The image is described with the 4096-dimensional output from the last fully connected layer of the Convolutional Neural Network (CNN), VGG16 BIBREF25 . The system performs an element-wise multiplication of the image and question features, after linearly transforming the image descriptor to 1024 dimensions. The final layer of the architecture is a softmax layer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"21ff674766c8e5f90e1e21e5c60f82057f93cdc0"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Fig. 6: We propose a novel application of predicting the number of redundant answers to collect from the crowd per visual question to efficiently capture the diversity of all answers for all visual questions. (a) For a batch of visual questions, our system first produces a relative ordering using the predicted confidence in whether a crowd would agree on an answer (upper half). Then, the system allocates a minimum number of annotations to all visual questions (bottom, left half) and then the extra available human budget to visual questions most confidently predicted to lead to crowd disagreement (bottom, right half). (b) For 121,512 visual questions, we show results for our system, a related VQA algorithm, and today’s status quo of random predictions. Boundary conditions are one answer (leftmost) and five answers (rightmost) for all visual questions. Our approach typically accelerates the capture of answer diversity by over 20% from today’s Status Quo selection; e.g., 21% for 70% of the answer diversity and 23% for 86% of the answer diversity. This translates to saving over 19 40-hour work weeks and $1800, assuming 30 seconds and $0.02 per answer."
],
"extractive_spans": [],
"free_form_answer": "The number of redundant answers to collect from the crowd is predicted to efficiently capture the diversity of all answers from all visual questions.",
"highlighted_evidence": [
"FLOAT SELECTED: Fig. 6: We propose a novel application of predicting the number of redundant answers to collect from the crowd per visual question to efficiently capture the diversity of all answers for all visual questions. (a) For a batch of visual questions, our system first produces a relative ordering using the predicted confidence in whether a crowd would agree on an answer (upper half). Then, the system allocates a minimum number of annotations to all visual questions (bottom, left half) and then the extra available human budget to visual questions most confidently predicted to lead to crowd disagreement (bottom, right half). (b) For 121,512 visual questions, we show results for our system, a related VQA algorithm, and today’s status quo of random predictions. Boundary conditions are one answer (leftmost) and five answers (rightmost) for all visual questions. Our approach typically accelerates the capture of answer diversity by over 20% from today’s Status Quo selection; e.g., 21% for 70% of the answer diversity and 23% for 86% of the answer diversity. This translates to saving over 19 40-hour work weeks and $1800, assuming 30 seconds and $0.02 per answer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What is the model architecture used?",
"How is the data used for training annotated?"
],
"question_id": [
"ecd5770cf8cb12cb34285e26ab834301c17c53e1",
"4a4ce942a7a6efd1fa1d6c91dedf7a89af64b729"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1: Examples of visual questions and corresponding answers, when 10 different people are asked to answer the same question about an image (“visual question”). As observed, the crowd sometimes all agree on a single answer (top) and at other times offer different answers (bottom). Given a visual question, we aim to build a prediction system that automatically decides whether multiple people would give the same answer.",
"TABLE I: Correlation between answer agreement and visual questions that elicit three different types of answers for VQAs on real images and abstract scenes. Shown for each answer type is the percentage of visual questions that lead to at most one disagreement (row 1), unanimous agreement (row 2), and exactly one disagreement (row 3) from 10 crowdsourced answers. On average, across all answer types for both datasets, the crowd agrees on the answer for nearly half (i.e., 53%) of all VQAs. Moreover, we observe crowd disagreement arises often for all three answer types, highlighting that the significance of crowd disagreement is applicable across various types of visual questions.",
"Fig. 2: Summary of answer diversity outcomes showing how frequently different numbers of unique answers arise when asking ten crowd workers to answer a visual question for (a) 369,861 visual questions about real images and (a) 90,000 visual questions about abstract scenes. Results are shown based on different degrees of answer agreement required to make an answer valid: only one person has to offer the answer, at least two people must agree on the answer, and at least three people must agree on the answer. The visual questions most often elicit exactly one answer per visual question but also regularly elicit up to three different answers per visual question.",
"Fig. 3: Illustration of visual questions that lead humans to (dis)agree on a single answer. As observed, (a) unanimous answer agreement arises when images are simple, questions are precise, and questions are visually grounded. (b-i) Answer disagreement arises for a variety of reasons: (b) expert skill needed, (c) human mistakes, (d) ambiguous question, (e) ambiguous visual content, (f) insufficient visual evidence, (g) subjective question, (h) answer synonyms, and (i) varying answer granularity.",
"Fig. 4: (a) Precision-recall curves and average precision (AP) scores for all benchmarked systems. Our random forest (RF) and deep learning (DL) classifiers outperform a related automated VQA baseline, showing the importance in modeling human disagreement as opposed to system uncertainty. (b) Examples of prediction results from our top-performing RF classifier. Shown are the top three visual questions for the most confidently predicted instances that lead to answer disagreement and agreement along with the observed crowd answers. These examples illustrate a strong language prior for making predictions. (Best viewed on pdf.)",
"Fig. 5: Precision-recall curves and average precision (AP) scores for our random forest (RF) and deep learning (DL) classifiers with different features (Question Only - Q; Image Only - I; Q +I) for visual questions that lead to (a-c) three answer types.",
"Fig. 6: We propose a novel application of predicting the number of redundant answers to collect from the crowd per visual question to efficiently capture the diversity of all answers for all visual questions. (a) For a batch of visual questions, our system first produces a relative ordering using the predicted confidence in whether a crowd would agree on an answer (upper half). Then, the system allocates a minimum number of annotations to all visual questions (bottom, left half) and then the extra available human budget to visual questions most confidently predicted to lead to crowd disagreement (bottom, right half). (b) For 121,512 visual questions, we show results for our system, a related VQA algorithm, and today’s status quo of random predictions. Boundary conditions are one answer (leftmost) and five answers (rightmost) for all visual questions. Our approach typically accelerates the capture of answer diversity by over 20% from today’s Status Quo selection; e.g., 21% for 70% of the answer diversity and 23% for 86% of the answer diversity. This translates to saving over 19 40-hour work weeks and $1800, assuming 30 seconds and $0.02 per answer."
],
"file": [
"1-Figure1-1.png",
"4-TableI-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"6-Figure4-1.png",
"7-Figure5-1.png",
"8-Figure6-1.png"
]
} | [
"What is the model architecture used?",
"How is the data used for training annotated?"
] | [
[
"1608.08188-Prediction Systems-5",
"1608.08188-Prediction Systems-6"
],
[
"1608.08188-8-Figure6-1.png"
]
] | [
"LSTM to encode the question, VGG16 to extract visual features. The outputs of LSTM and VGG16 are multiplied element-wise and sent to a softmax layer.",
"The number of redundant answers to collect from the crowd is predicted to efficiently capture the diversity of all answers from all visual questions."
] | 240 |
1805.06966 | Neural User Simulation for Corpus-based Policy Optimisation for Spoken Dialogue Systems | User Simulators are one of the major tools that enable offline training of task-oriented dialogue systems. For this task the Agenda-Based User Simulator (ABUS) is often used. The ABUS is based on hand-crafted rules and its output is in semantic form. Issues arise from both properties such as limited diversity and the inability to interface a text-level belief tracker. This paper introduces the Neural User Simulator (NUS) whose behaviour is learned from a corpus and which generates natural language, hence needing a less labelled dataset than simulators generating a semantic output. In comparison to much of the past work on this topic, which evaluates user simulators on corpus-based metrics, we use the NUS to train the policy of a reinforcement learning based Spoken Dialogue System. The NUS is compared to the ABUS by evaluating the policies that were trained using the simulators. Cross-model evaluation is performed i.e. training on one simulator and testing on the other. Furthermore, the trained policies are tested on real users. In both evaluation tasks the NUS outperformed the ABUS. | {
"paragraphs": [
[
"Spoken Dialogue Systems (SDS) allow human-computer interaction using natural speech. Task-oriented dialogue systems, the focus of this work, help users achieve goals such as finding restaurants or booking flights BIBREF0 .",
"Teaching a system how to respond appropriately in a task-oriented setting is non-trivial. In state-of-the-art systems this dialogue management task is often formulated as a reinforcement learning (RL) problem BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . In this framework, the system learns by a trial and error process governed by a reward function. User Simulators can be used to train the policy of a dialogue manager (DM) without real user interactions. Furthermore, they allow an unlimited number of dialogues to be created with each dialogue being faster than a dialogue with a human.",
"In this paper the Neural User Simulator (NUS) is introduced which outputs natural language and whose behaviour is learned from a corpus. The main component, inspired by BIBREF4 , consists of a feature extractor and a neural network based sequence-to-sequence model BIBREF5 . The sequence-to-sequence model consists of a recurrent neural network (RNN) encoder that encodes the dialogue history and a decoder RNN which outputs natural language. Furthermore, the NUS generates its own goal and possibly changes it during a dialogue. This allows the model to be deployed for training more sophisticated DM policies. To achieve this, a method is proposed that transforms the goal-labels of the used dataset (DSTC2) into labels whose behaviour can be replicated during deployment.",
"The NUS is trained on dialogues between real users and an SDS in a restaurant recommendation domain. Compared to much of the related work on user simulation, we use the trained NUS to train the policy of a reinforcement learning based SDS. In order to evaluate the NUS, an Agenda-Based User-Simulator (ABUS) BIBREF6 is used to train another policy. The two policies are compared against each other by using cross-model evaluation BIBREF7 . This means to train on one model and to test on the other. Furthermore, both trained policies are tested on real users. On both evaluation tasks the NUS outperforms the ABUS, which is currently one of the most popular off-line training tools for reinforcement learning based Spoken Dialogue Systems BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 .",
"The remainder of this paper is organised as follows. Section 2 briefly describes task-oriented dialogue. Section 3 describes the motivation for the NUS and discusses related work. Section 4 explains the structure of the NUS, how it is trained and how it is deployed for training a DM's policy. Sections 5 and 6 present the experimental setup and results. Finally, Section 7 gives conclusions."
],
[
"A Task-Oriented SDS is typically designed according to a structured ontology, which defines what the system can talk about. In a system recommending restaurants the ontology defines those attributes of a restaurant that the user can choose, called informable slots (e.g. different food types, areas and price ranges), the attributes that the user can request, called requestable slots (e.g. phone number or address) and the restaurants that it has data about. An attribute is referred to as a slot and has a corresponding value. Together these are referred to as a slot-value pair (e.g. area=north).",
"Using RL the DM is trained to act such that is maximises the cumulative future reward. The process by which the DM chooses its next action is called its policy. A typical approach to defining the reward function for a task-oriented SDS is to apply a small per-turn penalty to encourage short dialogues and to give a large positive reward at the end of each successful interaction."
],
[
"Ideally the DM's policy would be trained by interacting with real users. Although there are models that support on-line learning BIBREF15 , for the majority of RL algorithms, which require a lot of interactions, this is impractical. Furthermore, a set of users needs to be recruited every time a policy is trained. This makes common practices such as hyper-parameter optimization prohibitively expensive. Thus, it is natural to try to learn from a dataset which needs to be recorded only once, but can be used over and over again.",
"A problem with learning directly from recorded dialogue corpora is that the state space that was visited during the collection of the data is limited; the size of the recorded corpus usually falls short of the requirements for training a statistical DM. However, even if the size of the corpus is large enough the optimal dialogue strategy is likely not to be contained within it.",
"A solution is to transform the static corpus into a dynamic tool: a user simulator. The user simulator (US) is trained on a dialogue corpus to learn what responses a real user would provide in a given dialogue context. The US is trained using supervised learning since the aim is for it to learn typical user behaviour. For the DM, however, we want optimal behaviour which is why supervised learning cannot be used. By interacting with the SDS, the trained US can be used to train the DM's policy. The DM's policy is optimised using the feedback given by either the user simulator or a separate evaluator. Any number of dialogues can be generated using the US and dialogue strategies that are not in the recorded corpus can be explored.",
"Most user-simulators work on the level of user semantics. These usually consist of a user dialogue act (e.g. inform, or request) and a corresponding slot-value pair. The first statistical user simulator BIBREF16 used a simple bi-gram model INLINEFORM0 to predict the next user act INLINEFORM1 given the last system act INLINEFORM2 . It has the advantage of being purely probabilistic and domain-independent. However, it does not take the full dialogue history into account and is not conditioned on a goal, leading to incoherent user behaviour throughout a dialogue. BIBREF17 , BIBREF18 attempted to overcome goal inconsistency by proposing a graph-based model. However, developing the graph structure requires extensive domain-specific knowledge. BIBREF19 combined features from Sheffler and Young's work with Eckert's Model, by conditioning a set of probabilities on an explicit representation of the user goal and memory. A Markov Model is also used by BIBREF20 . It uses a large feature vector to describe the user's current state, which helps to compensate for the Markov assumption. However, the model is not conditioned on any goal. Therefore, it is not used to train a dialogue policy since it is impossible to determine whether the user goal was fulfilled. A hidden Markov model was proposed by BIBREF21 , which was also not used to train a policy. BIBREF22 cast user simulation as an inverse reinforcement learning problem where the user is modelled as a decision-making agent. The model did not incorporate a user goal and was hence not used to train a policy. The most prominent user model for policy optimisation is the Agenda-Based User Simulator BIBREF6 , which represents the user state elegantly as a stack of necessary user actions, called the agenda. The mechanism that generates the user response and updates the agenda does not require any data, though it can be improved using data. The model is conditioned on a goal for which it has update rules in case the dialogue system expresses that it cannot fulfil the goal. BIBREF4 modelled user simulation as a sequence-to-sequence task. The model can keep track of the dialogue history and user behaviour is learned entirely from data. However, goal changes were not modelled, even though a large proportion of dialogues within their dataset (DSTC2) contains goal changes. Their model outperformed the ABUS on statistical metrics, which is not surprising given that it was trained by optimising a statistical metric and the ABUS was not.",
"The aforementioned work focuses on user simulation at the semantic level. Multiple issues arise from this approach. Firstly, annotating the user-response with the correct semantics is costly. More data could be collected, if the US were to output natural language. Secondly, research suggests that the two modules of an SDS performing Spoken Language Understanding (SLU) and belief tracking should be jointly trained as a single entity BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . In fact in the second Dialogue State Tracking Challenge (DSTC2) BIBREF28 , the data of which this work uses, systems which used no external SLU module outperformed all systems that only used an external SLU Module. Training the policy of a DM in a simulated environment, when also using a joint system for SLU and belief tracking is not possible without a US that produces natural language. Thirdly, a US is sometimes augmented with an error model which generates a set of competing hypotheses with associated confidence scores trying to replicate the errors of the speech recogniser. When the error model matches the characteristics of the speech recogniser more accurately, the SDS performs better BIBREF29 . However, speech recognition errors are badly modelled based on user semantics since they arise (mostly) due to the phonetics of the spoken words and not their semantics BIBREF30 . Thus, an SDS that is trained with a natural language based error model is likely to outperform one trained with a semantic error model when tested on real users. Sequence-to-sequence learning for word-level user simulation is performed in BIBREF31 , though the model is not conditioned on any goal and hence not used for policy optimisation. A word-level user simulator was also used in BIBREF32 where it was built by augmenting the ABUS with a natural language generator."
],
[
"An overview of the NUS is given in Figure FIGREF2 . At the start of a dialogue a random goal INLINEFORM0 is generated by the Goal Generator. The possibilities for INLINEFORM1 are defined by the ontology. In dialogue turn INLINEFORM2 , the output of the SDS ( INLINEFORM3 ) is passed to the NUS's Feature Extractor, which generates a feature vector INLINEFORM4 based on INLINEFORM5 , the current user goal, INLINEFORM6 , and parts of the dialogue history. This vector is appended to the Feature History INLINEFORM7 . This sequence is passed to the sequence-to-sequence model (Fig. FIGREF7 ), which will generate the user's length INLINEFORM8 utterance INLINEFORM9 . As in Figure FIGREF7 , words in INLINEFORM10 corresponding to a slot are replaced by a slot token; a process called delexicalisation. If the SDS expresses to the NUS that there is no venue matching the NUS's constraints, the goal will be altered by the Goal Generator."
],
[
"The Goal Generator generates a random goal INLINEFORM0 at the start of the dialogue. It consists of a set of constraints, INLINEFORM1 , which specify the required venue e.g. (food=Spanish, area=north) and a number of requests, INLINEFORM2 , that specify the information that the NUS wants about the final venue e.g. the address or the phone number. The possibilities for INLINEFORM3 and INLINEFORM4 are defined by the ontology. In DSTC2 INLINEFORM5 can consist of a maximum of three constraints; food, area and pricerange. Whether each of the three is present is independently sampled with a probability of 0.66, 0.62 and 0.58 respectively. These probabilities were estimated from the DSTC2 data set. If no constraint is sampled then the goal is re-sampled. For each slot in INLINEFORM6 a value (e.g. north for area) is sampled uniformly from the ontology. Similarly, the presence of a request is independently sampled, followed by re-sampling if zero requests were chosen.",
"When training the sequence-to-sequence model, the Goal Generator is not used, but instead the goal labels from the DSTC2 dataset are used. In DSTC2 one goal-label is given to the entire dialogue. This goal is always the final goal. If the user's goal at the start of the dialogue is (food=eritrean, area=south), which is changed to (food=spanish, area=south), due to the non-existence of an Eritrean restaurant in the south, using only the final goal is insufficient to model the dialogue. The final goal can only be used for the requests as they are not altered during a dialogue. DSTC2 also provides turn-specific labels. These contain the constraints and requests expressed by the user up until and including the current turn. When training a policy with the NUS, such labels would not be available as they “predict the future\", i.e. when the turn-specific constraints change from (area=south) to (food=eritrean, area=south) it means that the user will inform the system about her desire to eat Eritrean food in the current turn.",
"In related work on user-simulation for which the DSTC2 dataset was used, the final goal was used for the entire dialogue BIBREF4 , BIBREF33 , BIBREF34 . As stated above, we do not believe this to be sufficient. The following describes how to update the turn-specific constraint labels such that their behaviour can be replicated when training a DM's policy, whilst allowing goal changes to be modelled. The update strategy is illustrated in Table TABREF4 with an example. The final turn keeps its constraints, from which we iterate backwards through the list of DSTC2's turn-specific constraints. The constraints of a turn will be set to the updated constraints of the succeeding turn, besides if the same slot is present with a different value. In that case the value will be kept. The behaviour of the updated turn-specific goal-labels can be replicated when the NUS is used to train a DM's policy. In the example, the food type changed due to the SDS expressing that there is no restaurant serving Eritrean food in the south. When deploying the NUS to train a policy, the goal is updated when the SDS outputs the canthelp dialogue act."
],
[
"The Feature Extractor generates the feature vector that is appended to the sequence of feature vectors, here called Feature History, that is passed to the sequence-to-sequence model. The input to the Feature Extractor is the output of the DM and the current goal INLINEFORM0 . Furthermore, as indicated in Figure FIGREF2 , the Feature Extractor keeps track of the currently accepted venue as well as the current and initial request-vector, which is explained below.",
"The feature vector INLINEFORM0 is made up of four sub-vectors. The motivation behind the way in which these four vectors were designed is to provide an embedding for the system response that preserves all necessary value-independent information.",
"The first vector, machine-act vector INLINEFORM0 , encodes the dialogue acts of the system response and consists of two parts; INLINEFORM1 . INLINEFORM2 is a binary representation of the system dialogue acts present in the input. Its length is thus the number of possible system dialogue acts. It is binary and not one-hot since in DSTC2 multiple dialogue acts can be in the system's response. INLINEFORM3 is a binary representation of the slot if the dialogue act is request or select and if it is inform or expl-conf together with a correct slot-value pair for an informable slot. The length is four times the number of informable slots. INLINEFORM4 is necessary due to the dependence of the sentence structure on the exact slot mentioned by the system. The utterances of a user in response to request(food) and request(area) are often very different.",
"The second vector, request-vector INLINEFORM0 , is a binary representation of the requests that have not yet been fulfilled. It's length is thus the number of requestable slots. In comparison to the other three vectors the feature extractor needs to remember it for the next turn. At the start of the dialogue the indices corresponding to requests that are in INLINEFORM1 are set to 1 and the rest to 0. Whenever the system informs a certain request the corresponding index in INLINEFORM2 is set to 0. When a new venue is proposed INLINEFORM3 is reset to the original request vector, which is why the Feature Extractor keeps track of it.",
"The third vector, inconsistency-vector INLINEFORM0 , represents the inconsistency between the system's response and INLINEFORM1 . Every time a slot is mentioned by the system, when describing a venue (inform) or confirming a slot-value pair (expl-conf or impl-conf), the indices corresponding to the slots that have been misunderstood are set to 1. The length of INLINEFORM2 is the number of informable slots. This vector is necessary in order for the NUS to correct the system.",
"The fourth vector, INLINEFORM0 , is a binary representation of the slots that are in the constraints INLINEFORM1 . It's length is thus the number of informable slots. This vector is necessary in order for the NUS to be able to inform about its preferred venue."
],
[
"The sequence-to-sequence model (Figure FIGREF7 ) consists of an RNN encoder, followed by a fully-connect layer and an RNN decoder. An RNN can be defined as: DISPLAYFORM0 ",
" At time-step INLINEFORM0 , an RNN uses an input INLINEFORM1 and an internal state INLINEFORM2 to produce its output INLINEFORM3 and its new internal state INLINEFORM4 . A specific RNN-design is usually defined using matrix multiplications, element-wise additions and multiplications as well as element-wise non-linear functions. There are a plethora of different RNN architectures that could be used and explored. Given that such exploration is not the focus of this work a single layer LSTM BIBREF35 is used for both the RNN encoder and decoder. The exact LSTM version used in this work uses a forget gate without bias and does not use peep-holes.",
"The first RNN (shown as white blocks in Fig. FIGREF7 ) takes one feature vector INLINEFORM0 at a time as its input ( INLINEFORM1 ). If the current dialogue turn is turn INLINEFORM2 then the final output of the RNN encoder is given by INLINEFORM3 , which is passed through a fully-connected layer (shown as the light-grey block) with linear activation function: DISPLAYFORM0 ",
" For a certain encoding INLINEFORM0 the sequence-to-sequence model should define a probability distribution over different sequences. By sampling from this distribution the NUS can generate a diverse set of sentences corresponding to the same dialogue context. The conditional probability distribution of a length INLINEFORM1 sequence is defined as: DISPLAYFORM0 ",
" The decoder RNN (shown as dark blocks) will be used to model INLINEFORM0 . It's input at each time-step is the concatenation of an embedding INLINEFORM1 (we used 1-hot) of the previous word INLINEFORM2 ( INLINEFORM3 ). For INLINEFORM4 a start-of-sentence (<SOS>) token is used as INLINEFORM5 . The end of the utterance is modelled using an end-of-sentence (<EOS>) token. When the decoder RNN generates the end-of-sentence token, the decoding process is terminated. The output of the decoder RNN, INLINEFORM6 , is passed through an affine transform followed by the softmax function, SM, to form INLINEFORM7 . A word INLINEFORM8 can be obtained by either taking the word with the highest probability or sampling from the distribution: DISPLAYFORM0 ",
" During training the words are not sampled from the output distribution, but instead the true words from the dataset are used. This a common technique that is often referred to as teacher-forcing, though it also directly follows from equation EQREF10 .",
"To generate a sequence using an RNN, beam-search is often used. Using beam-search with INLINEFORM0 beams, the words corresponding to the top INLINEFORM1 probabilities of INLINEFORM2 are the first INLINEFORM3 beams. For each succeeding INLINEFORM4 , the INLINEFORM5 words corresponding to the top INLINEFORM6 probabilities of INLINEFORM7 are taken for each of the INLINEFORM8 beams. This is followed by reducing the number of beams from now INLINEFORM9 down to INLINEFORM10 , by taking the INLINEFORM11 beams with the highest probability INLINEFORM12 . This is a deterministic process. However, for the NUS to always give the same response in the same context is not realistic. Thus, the NUS cannot cover the full breadth of user behaviour if beam-search is used. To solve this issue while keeping the benefit of rejecting sequences with low probability, a type of beam-search with sampling is used. The process is identical to the above, but INLINEFORM13 words per beam are sampled from the probability distribution. The NUS is now non-deterministic resulting in a diverse US. Using 2 beams gave a good trade-off between reasonable responses and diversity."
],
[
"The neural sequence-to-sequence model is trained to maximize the log probability that it assigns to the user utterances of the training data set: DISPLAYFORM0 ",
" The network was implemented in Tensorflow BIBREF36 and optimized using Tensorflow's default setup of the Adam optimizer BIBREF37 . The LSTM layers and the fully-connected layer had widths of 100 each to give a reasonable number of overall parameters. The width was not tuned. The learning rate was optimised on a held out validation set and no regularization methods used. The training set was shuffled at the dialogue turn level.",
"The manual transcriptions of the DSTC2 training set (not the ASR output) were used to train the sequence-to-sequence model. Since the transcriptions were done manually they contained spelling errors. These were manually corrected to ensure proper delexicalization. Some dialogues were discarded due to transcriptions errors being too large. After cleaning the dataset the training set consisted of 1609 dialogues with a total of 11638 dialogue turns. The validation set had 505 dialogues with 3896 dialogue turns. The maximum sequence length of the delexicalized turns was 22, including the end of sentence character. The maximum dialogue length was 30 turns.",
"All dialogue policies were trained with the PyDial toolkit BIBREF38 , by interacting with either the NUS or ABUS. The RL algorithm used is GP-SARSA BIBREF3 with hyperparameters taken from BIBREF39 . The reward function used gives a reward of 20 to a successfully completed dialogue and of -1 for each dialogue turn. The maximum dialogue length was 25 turns. The presented metrics are success rate (SR) and average reward over test dialogues. SR is the percentage of dialogues for which the system satisfied both the user's constraints and requests. The final goal, after possible goal changes, was used for this evaluation. When policies are trained using the NUS, its output is parsed using PyDial's regular expression based semantic decoder. The policies were trained for 4000 dialogues."
],
[
"The evaluation of user simulators is an ongoing area of research and a variety of techniques can be found in the literature. Most papers published on user simulation evaluate their US using direct methods. These methods evaluate the US through a statistical measure of similarity between the outputs of the US and a real user on a test set. Multiple models can outperform the ABUS on these metrics. However, this is unsurprising since these user simulators were trained on the same or similar metrics. The ABUS was explicitly proposed as a tool to train the policy of a dialogue manager and it is still the dominant form of US used for this task. Therefore, the only fair comparison between a new US model and the ABUS is to use the indirect method of evaluating the policies that were obtained by training with each US."
],
[
"In Schatzmann et. al schatztmann2005effects cross-model evaluation is proposed to compare user simulators. First, the user simulators to be evaluated are used to train INLINEFORM0 policy each. Then these policies are tested using the different user simulators and the results averaged. BIBREF7 showed that a strategy learned with a good user model still performs well when tested on poor user models. If a policy performs well on all user simulators and not just on the one that it was trained on, it indicates that the US with which it was trained is diverse and realistic, and thus the policy is likely to perform better on real users. For each US five policies ( INLINEFORM1 ), each using a different random seed for initialisation, are trained. Results are reported for both the best and the average performance on 1000 test dialogues. The ABUS is programmed to always mention the new goal after a goal change. In order to not let this affect our results we implement the same for the NUS by re-sampling a sentence if the new goal is not mentioned."
],
[
"Though the above test is already more indicative of policy performance on real users than measuring statistical metrics of user behaviour, a better test is to test with human users. For the test on human users, two policies for each US that was used for training are chosen from the five policies. The first policy is the one that performed best when tested on the NUS. The second is the one that performed best when tested on the ABUS. This choice of policies is motivated by a type of overfitting to be seen in Sec. SECREF17 . The evaluation of the trained dialogue policies in interaction with real users follows a similar set-up to BIBREF40 . Users are recruited through the Amazon Mechanical Turk (AMT) service. 1000 dialogues (250 per policy) were gathered. The learnt policies were incorporated into an SDS pipeline with a commercial ASR system. The AMT users were asked to find a restaurant that matches certain constraints and find certain requests. Subjects were randomly allocated to one of the four analysed systems. After each dialogue the users were asked whether they judged the dialogue to be successful or not which was then translated to the reward measure."
],
[
"Table TABREF18 shows the results of the cross-model evaluation after 4000 training dialogues. The policies trained with the NUS achieved an average success rate (SR) of 94.0% and of 96.6% when tested on the ABUS and the NUS, respectively. By comparison, the policies trained with the ABUS achieved average SRs of 99.5% and 45.5% respectively. Thus, training with the NUS leads to policies that can perform well on both USs, which is not the case for training with the ABUS. Furthermore, the best SRs when tested on the ABUS are similar at 99.9% (ABUS) and 99.8% (NUS). When tested on the NUS the best SRs were 71.5% (ABUS) and 98.0% (NUS). This shows that the behaviour of the Neural User Simulator is realistic and diverse enough to train policies that can also perform very well on the Agenda-Based User Simulator.",
"Of the five policies, for each US, the policy performing best on the NUS was not the best performing policy on the ABUS. This could indicate that the policy “overfits” to a particular user simulator. Overfitting usually manifests itself in worse results as the model is trained for longer. Five policies trained on each US for only 1000 dialogues were also evaluated, the results of which can be seen in Table TABREF19 . After training for 1000 dialogues, the average SR of the policies trained on the NUS when tested on the ABUS was 97.3% in comparison to 94.0% after 4000 dialogues. This behaviour was observed for all five seeds, which indicates that the policy indeed overfits to the NUS. For the policies trained with the ABUS this was not observed. This could indicate that the policy can learn to exploit some of the shortcomings of the trained NUS."
],
[
"The results of the human evaluation are shown in Table TABREF21 for 250 dialogues per policy. In Table TABREF21 policies are marked using an ID ( INLINEFORM0 ) that translates to results in Tables TABREF18 and TABREF19 . Both policies trained with the NUS outperformed those trained on the ABUS in terms of both reward and success rate. The best performing policy trained on the NUS achieves a 93.4% success rate and 13.8 average rewards whilst the best performing policy trained with the ABUS achieves only a 90.0% success rate and 13.3 average reward. This shows that the good performance of the NUS on the cross-model evaluation transfers to real users. Furthermore, the overfitting to a particular US is also observed in the real user evaluation. For not only the policies trained on the NUS, but also those trained on the ABUS, the best performing policy was the policy that performed best on the other US."
],
[
"We introduced the Neural User Simulator (NUS), which uses the system's response in its semantic form as input and gives a natural language response. It thus needs less labelling of the training data than User Simulators that generate a response in semantic form. It was shown that the NUS learns realistic user behaviour from a corpus of recorded dialogues such that it can be used to optimise the policy of the dialogue manager of a spoken dialogue system. The NUS was compared to the Agenda-Based User Simulator by evaluating policies trained with these user simulators. The trained policies were compared both by testing them with simulated users and also with real users. The NUS excelled on both evaluation tasks."
],
[
"This research was partly funded by the EPSRC grant EP/M018946/1 Open Domain Statistical Spoken Dialogue Systems. Florian Kreyssig is supported by the Studienstiftung des Deutschen Volkes. Paweł Budzianowski is supported by the EPSRC and Toshiba Research Europe Ltd."
]
],
"section_name": [
"Introduction",
"Task-Oriented Dialogue",
"Motivation and Related Work",
"Neural User Simulator",
"Goal Generator",
"Feature Extractor",
"Sequence-To-Sequence Model",
"Training",
"Experimental Setup",
"Testing with a simulated user",
"Testing with real users",
"Cross-Model Evaluation",
"Human Evaluation",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"252e1a8433906e0e54ce95c585f721688a4fb8a7"
],
"answer": [
{
"evidence": [
"Table TABREF18 shows the results of the cross-model evaluation after 4000 training dialogues. The policies trained with the NUS achieved an average success rate (SR) of 94.0% and of 96.6% when tested on the ABUS and the NUS, respectively. By comparison, the policies trained with the ABUS achieved average SRs of 99.5% and 45.5% respectively. Thus, training with the NUS leads to policies that can perform well on both USs, which is not the case for training with the ABUS. Furthermore, the best SRs when tested on the ABUS are similar at 99.9% (ABUS) and 99.8% (NUS). When tested on the NUS the best SRs were 71.5% (ABUS) and 98.0% (NUS). This shows that the behaviour of the Neural User Simulator is realistic and diverse enough to train policies that can also perform very well on the Agenda-Based User Simulator."
],
"extractive_spans": [],
"free_form_answer": "Average success rate is higher by 2.6 percent points.",
"highlighted_evidence": [
"The policies trained with the NUS achieved an average success rate (SR) of 94.0% and of 96.6% when tested on the ABUS and the NUS, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"c1ed3e2a6e21d189790e8b23f0edc5422b51ec52",
"e89a4ff03c55b1be6dd8bff517f6f3f9822e4235"
],
"answer": [
{
"evidence": [
"In this paper the Neural User Simulator (NUS) is introduced which outputs natural language and whose behaviour is learned from a corpus. The main component, inspired by BIBREF4 , consists of a feature extractor and a neural network based sequence-to-sequence model BIBREF5 . The sequence-to-sequence model consists of a recurrent neural network (RNN) encoder that encodes the dialogue history and a decoder RNN which outputs natural language. Furthermore, the NUS generates its own goal and possibly changes it during a dialogue. This allows the model to be deployed for training more sophisticated DM policies. To achieve this, a method is proposed that transforms the goal-labels of the used dataset (DSTC2) into labels whose behaviour can be replicated during deployment."
],
"extractive_spans": [
"DSTC2"
],
"free_form_answer": "",
"highlighted_evidence": [
"To achieve this, a method is proposed that transforms the goal-labels of the used dataset (DSTC2) into labels whose behaviour can be replicated during deployment."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The manual transcriptions of the DSTC2 training set (not the ASR output) were used to train the sequence-to-sequence model. Since the transcriptions were done manually they contained spelling errors. These were manually corrected to ensure proper delexicalization. Some dialogues were discarded due to transcriptions errors being too large. After cleaning the dataset the training set consisted of 1609 dialogues with a total of 11638 dialogue turns. The validation set had 505 dialogues with 3896 dialogue turns. The maximum sequence length of the delexicalized turns was 22, including the end of sentence character. The maximum dialogue length was 30 turns."
],
"extractive_spans": [
"The manual transcriptions of the DSTC2 training set "
],
"free_form_answer": "",
"highlighted_evidence": [
"The manual transcriptions of the DSTC2 training set (not the ASR output) were used to train the sequence-to-sequence model. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"by how much did nus outperform abus?",
"what corpus is used to learn behavior?"
],
"question_id": [
"7006c66a15477b917656f435d66f63760d33a304",
"a15bc19674d48cd9919ad1cf152bf49c88f4417d"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: General Architecture of the Neural User Simulator. The System Output is passed to the Feature Extractor. It generates a new feature vector that is appended to the Feature History, which is passed to the sequence-to-sequence model to produce the user utterance. At the start of the dialogue the Goal Generator generates a goal, which might change during the course of the dialogue.",
"Table 1: An example of how DSTC2’s turn-specific constraint labels can be transformed such that their behaviour can be replicated when training a dialogue manager.",
"Figure 2: Sequence-To-Sequence model of the Neural User Simulator. Here, the NUS is generating the user response to the third system output. The white, light-grey and dark blocks represent the RNN encoder, a fully-connected layer and the RNN decoder respectively. The previous output of the decoder is passed to its input for the next time-step. v3:1 are the first three feature vectors (see Sec. 4.2).",
"Table 2: Results for policies trained for 4000 dialogues on NUS and ABUS when tested on both USs for 1000 dialogues. Five policies with different initialisations were trained for each US. Both average and best results are shown.",
"Table 4: Real User Evaluation. Results over 250 dialogues with human users. N1 and A1 performed best on the NUS. N2 and A2 performed best on the ABUS. Rewards are not comparable to Table 2 and 3 since all user goals were achievable.",
"Table 3: As Table 2 but trained for 1000 dialogues."
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"6-Figure2-1.png",
"8-Table2-1.png",
"8-Table4-1.png",
"8-Table3-1.png"
]
} | [
"by how much did nus outperform abus?"
] | [
[
"1805.06966-Cross-Model Evaluation-0"
]
] | [
"Average success rate is higher by 2.6 percent points."
] | 243 |
1806.03125 | Text Classification based on Word Subspace with Term-Frequency | Text classification has become indispensable due to the rapid increase of text in digital form. Over the past three decades, efforts have been made to approach this task using various learning algorithms and statistical models based on bag-of-words (BOW) features. Despite its simple implementation, BOW features lack semantic meaning representation. To solve this problem, neural networks started to be employed to learn word vectors, such as the word2vec. Word2vec embeds word semantic structure into vectors, where the angle between vectors indicates the meaningful similarity between words. To measure the similarity between texts, we propose the novel concept of word subspace, which can represent the intrinsic variability of features in a set of word vectors. Through this concept, it is possible to model text from word vectors while holding semantic information. To incorporate the word frequency directly in the subspace model, we further extend the word subspace to the term-frequency (TF) weighted word subspace. Based on these new concepts, text classification can be performed under the mutual subspace method (MSM) framework. The validity of our modeling is shown through experiments on the Reuters text database, comparing the results to various state-of-the-art algorithms. | {
"paragraphs": [
[
"Text classification has become an indispensable task due to the rapid growth in the number of texts in digital form available online. It aims to classify different texts, also called documents, into a fixed number of predefined categories, helping to organize data, and making easier for users to find the desired information. Over the past three decades, many methods based on machine learning and statistical models have been applied to perform this task, such as latent semantic analysis (LSA), support vector machines (SVM), and multinomial naive Bayes (MNB).",
"The first step in utilizing such methods to categorize textual data is to convert the texts into a vector representation. One of the most popular text representation models is the bag-of-words model BIBREF0 , which represents each document in a collection as a vector in a vector space. Each dimension of the vectors represents a term (e.g., a word, a sequence of words), and its value encodes a weight, which can be how many times the term occurs in the document.",
"Despite showing positive results in tasks such as language modeling and classification BIBREF1 , BIBREF2 , BIBREF3 , the BOW representation has limitations: first, feature vectors are commonly very high-dimensional, resulting in sparse document representations, which are hard to model due to space and time complexity. Second, BOW does not consider the proximity of words and their position in the text and consequently cannot encode the words semantic meanings.",
"To solve these problems, neural networks have been employed to learn vector representations of words BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . In particular, the word2vec representation BIBREF8 has gained attention. Given a training corpus, word2vec can generate a vector for each word in the corpus that encodes its semantic information. These word vectors are distributed in such a way that words from similar contexts are represented by word vectors with high correlation, while words from different contexts are represented by word vectors with low correlation.",
"One crucial aspect of the word2vec representation is that arithmetic and distance calculation between two word vectors can be performed, giving information about their semantic relationship. However, rather than looking at pairs of word vectors, we are interested in studying the relationship between sets of vectors as a whole and, therefore, it is desirable to have a text representation based on a set of these word vectors.",
"To tackle this problem, we introduce the novel concept of word subspace. It is mathematically defined as a low dimensional linear subspace in a word vector space with high dimensionality. Given that words from texts of the same class belong to the same context, it is possible to model word vectors of each class as word subspaces and efficiently compare them in terms of similarity by using canonical angles between the word subspaces. Through this representation, most of the variability of the class is retained. Consequently, a word subspace can effectively and compactly represent the context of the corresponding text. We achieve this framework through the mutual subspace method (MSM) BIBREF9 .",
"The word subspace of each text class is modeled by applying PCA without data centering to the set of word vectors of the class. When modeling the word subspaces, we assume only one occurrence of each word inside the class.",
"However, as seen in the BOW approach, the frequency of words inside a text is an informative feature that should be considered. In order to introduce this feature in the word subspace modeling and enhance its performance, we further extend the concept of word subspace to the term-frequency (TF) weighted word subspace.",
"In this extension, we consider a set of weights, which encodes the words frequencies, when performing the PCA. Text classification with TF weighted word subspace can also be performed under the framework of MSM. We show the validity of our modeling through experiments on the Reuters database, an established database for natural language processing tasks. We demonstrate the effectiveness of the word subspace formulation and its extension, comparing our methods' performance to various state-of-art methods.",
"The main contributions of our work are:",
"The remainder of this paper is organized as follows. In Section \"Related Work\" , we describe the main works related to text classification. In Section \"Word subspace\" , we present the formulation of our proposed word subspace. In Section \"Conventional text classification methods\" , we explain how text classification with word subspaces is performed under the MSM framework. Then, we present the TF weighted word subspace extension in Section \"TF weighted word subspace\" . Evaluation experiments and their results are described in Section \"Experimental Evaluation\" . Further discussion is then presented in Section \"Discussion\" , and our conclusions are described in Section \"Conclusions and Future Work\" ."
],
[
"In this section, we outline relevant work towards text classification. We start by describing how text data is conventionally represented using the bag-of-words model and then follow to describe the conventional methods utilized in text classification."
],
[
"The bag-of-words representation comes from the hypothesis that frequencies of words in a document can indicate the relevance of the document to a query BIBREF0 , that is, if documents and a query have similar frequencies for the same words, they might have a similar meaning. This representation is based on the vector space model (VSM), that was developed for the SMART information retrieval system BIBREF10 . In the VSM, the main idea is that documents in a collection can be represented as a vector in a vector space, where vectors close to each other represent semantically similar documents.",
"More formally, a document $d$ can be represented by a vector in $\\mathbb {R}^{n}$ , where each dimension represents a different term. A term can be a single word, constituting the conventional bag-of-words, or combinations of $N$ words, constituting the bag-of-N-grams. If a term occurs in the document, its position in the vector will have a non-zero value, also known as term weight. Two documents in the VSM can be compared to each other by taking the cosine distance between them BIBREF1 .",
"There are several ways to compute the term weights. Among them, we can highlight some: Binary weights, term-frequency (TF) weights, and term-frequency inverse document-frequency (TF-IDF) weights.",
"Consider a corpus with documents $D = \\lbrace d_i\\rbrace _{i=1}^{|D|}$ and a vocabulary with all terms in the corpus $V = \\lbrace w_i\\rbrace _{i=1}^{|V|}$ . The term weights can be defined as:",
"Binary weight: If a term occurs in the document, its weight is 1. Otherwise, it is zero.",
"Term-frequency weight (TF): The weight of a term $w$ is defined by the number of times it occurs in the document $d$ . ",
"$$TF(w,d) = n_d^w$$ (Eq. 8) ",
"Inverse document-frequency: The weight of a term $w$ , given the corpus $D$ , is defined as the total number of documents $|D|$ divided by the number of documents that have the term $w$ , $|D^w|$ . ",
"$$IDF(w | D) = \\frac{|D|}{|D^w|}$$ (Eq. 10) ",
"Term-frequency inverse document-frequency (TF-IDF): The weight of a term $w$ is defined by the multiplication of its term-frequency and its inverse document-frequency. When considering only the TF weights, all terms have the same importance among the corpus. By using the IDF weight, words that are more common across all documents in $D$ receive a smaller weight, giving more importance to rare terms in the corpus. ",
"$$TFIDF(w,d | D)=TF \\times IDF$$ (Eq. 12) ",
"In very large corpus, it is common to consider the logarithm of the IDF in order to dampen its effect. ",
"$$TFIDF(w,d | D)=TF \\times log_{10}(IDF)$$ (Eq. 13) "
],
[
"Multi-variate Bernoulli (MVB) and multinomial naive Bayes (MNB) are two generative models based on the naive Bayes assumption. In other words, they assume that all attributes (e.g., the frequency of each word, the presence or absence of a word) of each text are independent of each other given the context of the class BIBREF11 .",
"In the MVB model, a document is represented by a vector generated by a bag-of-words with binary weights. In this case, a document can be considered an event, and the presence or the absence of the words to be the attributes of the event. On the other hand, the MNB model represents each document as a vector generated by a bag-of-words with TF weights. Here, the individual word occurrences are considered as events and the document is a collection of word events.",
"Both these models use the Bayes rule to classify a document. Consider that each document should be classified into one of the classes in $C=\\lbrace c_j\\rbrace _{j=1}^{|C|}$ . The probability of each class given the document is defined as: ",
"$$P(c_j|d_i) = \\frac{P(d_i|c_j)P(c_j)}{P(d_i)}.$$ (Eq. 16) ",
"The prior $P(d_i)$ is the same for all classes, so to determine the class to which $d_i$ belongs to, the following equation can be used: ",
"$$prediction(d_i) = argmax_{c_j}P(d_i|c_j)P(c_j)$$ (Eq. 17) ",
"The prior $P(c_j)$ can be obtained by the following equation: ",
"$$P(c_j) = \\frac{1+|D_j|}{|C|+|D|},$$ (Eq. 18) ",
"where $|D_j|$ is the number of documents in class $c_j$ .",
"As for the posterior $P(d_i|c_j)$ , different calculations are performed for each model. For MVB, it is defined as: ",
"$$P(d_i|c_j) = \\prod _{k=1}^{|V|}P(w_k|c_j)^{t_i^k}(1-P(w_k|c_j))^{1-t_i^k},$$ (Eq. 19) ",
"where $w_k$ is the k-th word in the vocabulary $V$ , and $t_i^k$ is the value (0 or 1) of the k-th element of the vector of document $d_i$ .",
"For the MNB, it is defined as: ",
"$$P(d_i|c_j) = P(|d_i|)|d_i|!\\prod _{k=1}^{|V|}\\frac{P(w_k|c_j)^{n_i^k}}{n_i^k!},$$ (Eq. 20) ",
"where $|d_i|$ is the number of words in document $d_i$ and $n_i^k$ is the k-th element of the vector of document $d_i$ and it represents how many times word $w_k$ occurs in $d_i$ .",
"Finally, the posterior $P(w_k|c_j)$ can be obtained by the following equation: ",
"$$P(w_k|c_j) = \\frac{1+|D_j^k|}{|C|+|D|},$$ (Eq. 21) ",
"where $|D_j^k|$ is the number of documents in class $c_j$ that contain the word $w_k$ .",
"In general, MVB tends to perform better than MNB at small vocabulary sizes whereas MNB is more efficient on large vocabularies.",
"Despite being robust tools for text classification, both these models depend directly on the bag-of-words features and do not naturally work with representations such as word2vec.",
"Latent semantic analysis (LSA), or latent semantic indexing (LSI), was proposed in BIBREF12 , and it extends the vector space model by using singular value decomposition (SVD) to find a set of underlying latent variables which spans the meaning of texts.",
"It is built from a term-document matrix, in which each row represents a term, and each column represents a document. This matrix can be built by concatenating the vectors of all documents in a corpus, obtained using the bag-of-words model, that is, $ {X} = [ {v}_1, {v}_2, ..., {v}_{|D|}]$ , where ${v}_i$ is the vector representation obtained using the bag-of-words model.",
"In this method, the term-document matrix is decomposed using the singular value decomposition, ",
"$${X} = {U\\Sigma V}^\\top ,$$ (Eq. 23) ",
"where $U$ and $V$ are orthogonal matrices and correspond to the left singular vectors and right singular vectors of $X$ , respectively. $\\Sigma $ is a diagonal matrix, and it contains the square roots of the eigenvalues of $X^TX$ and $XX^T$ . LSA finds a low-rank approximation of $X$ by selecting only the $k$ largest singular values and its respective singular vectors, ",
"$${X}_k = {U}_k{\\Sigma }_k {V}_k^{\\top }.$$ (Eq. 24) ",
"To compare two documents, we project both of them into this lower dimension space and calculate the cosine distance between them. The projection ${\\hat{d}}$ of document ${d}$ is obtained by the following equation: ",
"$${\\hat{d}} = {\\Sigma }_k^{-1} {U}_k^\\top {d}.$$ (Eq. 25) ",
"Despite its extensive application on text classification BIBREF13 , BIBREF14 , BIBREF15 , this method was initially proposed for document indexing and, therefore, does not encode any class information when modeling the low-rank approximation. To perform classification, 1-nearest neighbor is usually performed, placing a query document into the class of the nearest training document.",
"The support vector machine (SVM) was first presented in BIBREF16 and performs the separation between samples of two different classes by projecting them onto a higher dimensionality space. It was first applied in text classification by BIBREF17 and have since been successfully applied in many tasks related to natural language processing BIBREF18 , BIBREF19 .",
"Consider a training data set $D$ , with $n$ samples ",
"$$D = \\lbrace ({x}_i,c_i)|{x}_i\\in \\mathbb {R}^p, c_i \\in \\lbrace -1,1\\rbrace \\rbrace _{i=1}^{n},$$ (Eq. 27) ",
"where $c_i$ represents the class to which ${x}_i$ belongs to. Each ${x}_i$ is a $p$ -dimensional vector. The goal is to find the hyperplane that divides the points from $c_i = 1$ from the points from $c_i = -1$ . This hyperplane can be written as a set of points $x$ satisfying: ",
"$${w} \\cdot {x} - b = 0,$$ (Eq. 28) ",
"where $\\cdot $ denotes the dot product. The vector ${w}$ is perpendicular to the hyperplane. The parameter $\\frac{b}{\\Vert {w}\\Vert }$ determines the offset of the hyperplane from the origin along the normal vector ${w}$ .",
"We wish to choose ${w}$ and $b$ , so they maximize the distance between the parallel hyperplanes that are as far apart as possible, while still separating the data.",
"If the training data is linearly separable, we can select two hyperplanes in a way that there are no points between them and then try to maximize the distance. In other words, minimize $\\Vert {w}\\Vert $ subject to $c_i({w}\\cdot {x}_u-b) \\ge 1, i=\\lbrace 1,2,...,n\\rbrace $ . If the training data is not linearly separable, the kernel trick can be applied, where every dot product is replaced by a non-linear kernel function."
],
[
"All methods mentioned above utilize the BOW features to represent a document. Although this representation is simple and powerful, its main problem lies on disregarding the word semantics within a document, where the context and meaning could offer many benefits to the model such as identification of synonyms.",
"In our formulation, words are represented as vectors in a real-valued feature vector space $\\mathbb {R}^{p}$ , by using word2vec BIBREF8 . Through this representation, it is possible to calculate the distance between two words, where words from similar contexts are represented by vectors close to each other, while words from different contexts are represented as far apart vectors. Also, this representation brings the new concept of arithmetic operations between words, where operations such as addition and subtraction carry meaning (eg., “king”-“man”+“woman”=“queen”) BIBREF20 .",
"Consider a set of documents which belong to the same context $D_c = \\lbrace d_i\\rbrace _{i=1}^{|D_c|}$ . Each document $d_i$ is represented by a set of $N_i$ words, $d_i = \\lbrace w_k\\rbrace _{k=1}^{N_i}$ . By considering that all words from documents of the same context belong to the same distribution, a set of words $W_c = \\lbrace w_k\\rbrace _{k=1}^{N_c}$ with the words in the context $c$ is obtained.",
"We then translate these words into word vectors using word2vec, resulting in a set of word vectors $X_c = \\lbrace {x}^k_c\\rbrace _{k=1}^{N_c} \\in \\mathbb {R}^p$ . This set of word vectors is modeled into a word subspace, which is a compact, scalable and meaningful representation of the whole set. Such a word subspace is generated by applying PCA to the set of word vectors.",
"First, we compute an autocorrelation matrix, ${R}_c$ : ",
"$${R}_c = \\frac{1}{N_c}\\sum _{i=1}^{N_c}{x}^{i}_c{x}_c^{i^{\\top }}.$$ (Eq. 29) ",
"The orthonormal basis vectors of $m_c$ -dimensional subspace ${Y}_c$ are obtained as the eigenvectors with the $m_c$ largest eigenvalues of the matrix ${R}_c$ . We represent a subspace ${Y}_c$ by the matrix ${Y}_c \\in \\mathbb {R}^{p \\times m_c}$ , which has the corresponding orthonormal basis vectors as its column vectors."
],
[
"We formulate our problem as a single label classification problem. Given a set of training documents, which we will refer as corpus, $D = \\lbrace d_i\\rbrace _{i=1}^{|D|}$ , with known classes $C = \\lbrace c_j\\rbrace _{j=1}^{|C|}$ , we wish to classify a query document $d_q$ into one of the classes in $C$ .",
"Text classification based on word subspace can be performed under the framework of mutual subspace method (MSM). This task involves two different stages: A learning stage, where the word subspace for each class is modeled, and a classification stage, where the word subspace for a query is modeled and compared to the word subspaces of the classes.",
"In the learning stage, it is assumed that all documents of the same class belong to the same context, resulting in a set of words $W_c = \\lbrace w_c^k\\rbrace _{k=1}^{N_c}$ . This set assumes that each word appears only once in each class. Each set $\\lbrace W_c\\rbrace _{c=1}^{|C|}$ is then modeled into a word subspace ${Y}_c$ , as explained in Section \"Word subspace\" . As the number of words in each class may vary largely, the dimension $m_c$ of each class word subspace is not set to the same value.",
"In the classification stage, for a query document $d_q$ , it is also assumed that each word occurs only once, generating a subspace ${Y}_q$ .",
"To measure the similarity between a class word subspace ${Y}_c$ and a query word subspace ${Y}_q$ , the canonical angles between the two word subspaces are used BIBREF21 . There are several methods for calculating canonical angles BIBREF22 , BIBREF23 , and BIBREF24 , among which the simplest and most practical is the singular value decomposition (SVD). Consider, for example, two subspaces, one from the training data and another from the query, represented as matrices of bases, ${Y}_{c} = [{\\Phi }_{1} \\ldots {\\Phi }_{m_c}] \\in \\mathbb {R}^{p \\times m_c}$ and ${Y}_{q} = [{\\Psi }_{1} \\ldots {\\Psi }_{m_q}] \\in \\mathbb {R}^{p \\times m_q}$ , where ${\\Phi }_{i}$ are the bases for ${Y}_c$ and ${\\Psi }_{i}$ are the bases for ${Y}_q$ . Let the SVD of ${Y}_c^{\\top }{Y}_q \\in \\mathbb {R}^{m_c \\times m_q}$ be ${Y}_c^{\\top }{Y}_q = {U \\Sigma V}^{\\top }$ , where ${Y}_q$0 , ${Y}_q$1 represents the set of singular values. The canonical angles ${Y}_q$2 can be obtained as ${Y}_q$3 ${Y}_q$4 . The similarity between the two subspaces is measured by ${Y}_q$5 angles as follows: ",
"$$S_{({Y}_c,{Y}_q)}[t] = \\frac{1}{t}\\sum _{i = 1}^{t} \\cos ^{2} \\theta _{i},\\; 1 \\le t \\le m_q, \\; m_q \\le m_c.$$ (Eq. 30) ",
"Fig. 1 shows the modeling and comparison of sets of words by MSM. This method can compare sets of different sizes, and naturally encodes proximity between sets with related words.",
"Finally, the class with the highest similarity with $d_q$ is assigned as the class of $d_q$ : ",
"$$prediction(d_q) = argmax_c(S_{({Y}_c,{Y}_q)}).$$ (Eq. 32) "
],
[
"The word subspace formulation presented in Section \"Word subspace\" is a practical and compact way to represent sets of word vectors, retaining most of the variability of features. However, as seen in the BOW features, the frequency of words is relevant information that can improve the characterization of a text. To incorporate this information into the word subspace modeling, we propose an extension of the word subspace, called the term-frequency (TF) weighted word subspace.",
"Like the word subspace, the TF weighted word subspace is mathematically defined as a low-dimensional linear subspace in a word vector space with high dimensionality. However, a weighted version of the PCA BIBREF25 , BIBREF26 is utilized to incorporate the information given by the frequencies of words (term-frequencies). This TF weighted word subspace is equivalent to the word subspace if we consider all occurrences of the words.",
"Consider the set of word vectors $\\lbrace {x}_c^k\\rbrace _{k=1}^{N_c} \\in \\mathbb {R}^{p}$ , which represents each word in the context $c$ , and the set of weights $\\lbrace \\omega _i\\rbrace _{i=1}^{N_c}$ , which represent the frequencies of the words in the context $c$ .",
"We incorporate these frequencies into the subspace calculation by weighting the data matrix ${X}$ as follows: ",
"$${\\widetilde{X}}={X}{\\Omega }^{1/2},$$ (Eq. 33) ",
"where ${X} \\in \\mathbb {R}^{p \\times N_c}$ is a matrix containing the word vectors $\\lbrace {x}_c^k\\rbrace _{k=1}^{N_c}$ and ${\\Omega }$ is a diagonal matrix containing the weights $\\lbrace \\omega _i\\rbrace _{i=1}^{N_c}$ .",
"We then perform PCA by solving the SVD of the matrix ${\\widetilde{X}}$ : ",
"$${\\widetilde{X}}={AMB}^{\\top },$$ (Eq. 34) ",
"where the columns of the orthogonal matrices ${A}$ and ${B}$ are, respectively, the left-singular vectors and right-singular vectors of the matrix ${\\widetilde{X}}$ , and the diagonal matrix ${M}$ contains singular values of ${\\widetilde{X}}$ .",
"Finally, the orthonormal basis vectors of the $m_c$ -dimensional TF weighted subspace ${W}$ are the column vectors in ${A}$ corresponding to the $m_c$ largest singular values in ${M}$ .",
"Text classification with TF weighted word subspace can also be performed under the framework of MSM. In this paper, we will refer to MSM with TF weighted word subspace as TF-MSM."
],
[
"In this section we describe the experiments performed to demonstrate the validity of our proposed method and its extension. We used the Reuters-8 dataset without stop words from BIBREF27 aiming at single-label classification, which is a preprocessed format of the Reuters-21578. Words in the texts were considered as they appeared, without performing stemming or typo correction. This database has eight different classes with the number of samples varying from 51 to over 3000 documents, as can be seen in Table 1 .",
"To obtain the vector representation of words, we used a freely available word2vec model, trained by BIBREF8 , on approximately 100 billion words, which encodes the vector representation in $\\mathbb {R}^{300}$ of over 3 million words from several different languages. Since we decided to focus on English words only, we filtered these vectors to about 800 thousand words, excluding all words with non-roman characters.",
"To show the validity of our word subspace representation for text classification and the proposed extension, we divided our experiment section into two parts: The first one aims to verify if sets of word vectors are suitable for subspace representation, and the second one puts our methods in practice in a text classification test, comparing our results with the conventional methods described in Section \"Related Work\" ."
],
[
"In this experiment, we modeled the word vectors from each class in the Reuters-8 database into a word subspace. The primary goal is to visualize how much of the text data can be represented by a lower dimensional subspace.",
"Subspace representations are very efficient in compactly represent data that is close to a normal distribution. This characteristic is due to the application of the PCA, that is optimal to find the direction with the highest variation within the data.",
"In PCA, the principal components give the directions of maximum variance, while their corresponding eigenvalues give the variance of the data in each of them. Therefore, by observing the distribution of the eigenvalues computed when performing PCA in the modeling of the subspaces, we can suggest if the data is suitable or not for subspace representation.",
"For each class, we normalized the eigenvalues by the largest one of the class. Fig. 2 shows the mean of the eigenvalues and the standard deviation among classes. It is possible to see that the first largest eigenvalues retain larger variance than the smallest ones. In fact, looking at the first 150 largest eigenvalues, we can see that they retain, on average, 86.37% of the data variance. Also, by observing the standard deviation, we can understand that the eigenvalues distribution among classes follows the same pattern, that is, most of the variance is in the first dimensions. This plot indicates that text data represented by vectors generated with word2vec is suitable for subspace representation."
],
[
"In this experiment, we performed text classification among the classes in the Reuters-8 database. We compared the classification using the word subspace, and its weighted extension, based on MSM (to which we will refer as MSM and TF-MSM, respectively) with the baselines presented in Section \"Related Work\" : MVB, MNB, LSA, and SVM. Since none of the baseline methods work with vector set classification, we also compared to a simple baseline for comparing sets of vectors, defined as the average of similarities between all vector pair combinations of two given sets. For two matrices ${A}$ and ${B}$ , containing the sets of vectors $\\lbrace {x}^{i}_a \\rbrace _{i = 1}^{N_A}$ and $\\lbrace {x}^{i}_b \\rbrace _{i = 1}^{N_B}$ , respectively, where $N_A$ and $N_B$ are the number of main words in each set, the similarity is defined as: ",
"$$Sim_{(A,B)} = \\frac{1}{N_A N_B}\\sum _{i}^{N_A}\\sum _{j}^{N_B}{{x}_a^i}^{\\top }{x}_b^j.$$ (Eq. 41) ",
"We refer to this baseline as similarity average (SA). For this method, we only considered one occurrence of each word in each set.",
"Different features were used, depending on the method. Classification with SA, MSM, and TF-MSM was performed using word2vec features, to which we refer as w2v. For MVB, due to its nature, only bag-of-words features with binary weights were used (binBOW). For the same reason, we only used bag-of-words features with term-frequency weights (tfBOW) with MNB. Classification with LSA and SVM is usually performed using bag-of-words features and, therefore, we tested with binBOW, tfBOW, and with the term-frequency inverse document-frequency weight, tfidfBOW. We also tested them using word2vec vectors. In this case, we considered each word vector from all documents in each class to be a single sample.",
"To determine the dimensions of the class subspaces and query subspace of MSM and TF-MSM, and the dimension of the approximation performed by LSA, we performed a 10-fold cross validation, wherein each fold, the data were randomly divided into train (60%), validation (20%) and test set (20%).",
"The results can be seen in Table 2 . The simplest baseline, SA with w2v, achieved an accuracy rate of 78.73%. This result is important because it shows the validity of the word2vec representation, performing better than more elaborate methods based on BOW, such as MVB with binBOW.",
"LSA with BOW features was almost 10% more accurate than SA, where the best results with binary weights were achieved with an approximation with 130 dimensions, with TF weights were achieved with 50 dimensions, and with TF-IDF weights were achieved with 30 dimensions. SVM with BOW features was about 3% more accurate than LSA, with binary weights leading to a higher accuracy rate.",
"It is interesting to note that despite the reasonably high accuracy rates achieved using LSA and SVM with BOW features, they poorly performed when using w2v features.",
"Among the baselines, the best method was MNB with tfBOW features, with an accuracy of 91.47%, being the only conventional method to outperform MSM. MSM with w2v had an accuracy rate of 90.62%, with the best results achieved with word subspace dimensions for the training classes ranging from 150 to 181, and for the query ranging from 3 to 217. Incorporating the frequency information in the subspace modeling resulted in higher accuracy, with TF-MSM achieving 92.01%, with dimensions of word subspaces for training classes ranging from 150 to 172, and for the query, ranging from 2 to 109. To confirm that TF-MSM is significantly more accurate than MNB, we performed a t-test to compare their results. It resulted in a p-value of 0.031, which shows that at a 95% significance level, TF-MSM has produced better results."
],
[
"Given the observation of the eigenvalues distribution of word vectors, we could see that word vectors that belong to the same context, i.e., same class, are suitable for subspace representation. Our analysis showed that half of the word vector space dimensions suffice to represent most of the variability of the data in each class of the Reuters-8 database.",
"The results from the text classification experiment showed that subspace-based methods performed better than the text classification methods discussed in this work. Ultimately, our proposed TF weighted word subspace with MSM surpassed all the other methods. word2vec features are reliable tools to represent the semantic meaning of the words and when treated as sets of word vectors, they are capable of representing the content of texts. However, despite the fact that word vectors can be treated separately, conventional methods such as SVM and LSA may not be suitable for text classification using word vectors.",
"Among the conventional methods, LSA and SVM achieved about 86% and 89%, respectively, when using bag-of-words features. Interestingly, both methods had better performance when using binary weights. For LSA, we can see that despite the slight differences in the performance, tfidfBOW required approximations with smaller dimensions. SVM had the lowest accuracy rate when using the tfidfBOW features. One possible explanation for this is that TF-IDF weights are useful when rare words and very frequent words exist in the corpus, giving higher weights for rare words and lower weights for common words. Since we removed the stop words, the most frequent words among the training documents were not considered and, therefore, using TF-IDF weights did not improve the results.",
"Only MNB with tfBOW performed better than MSM. This result may be because tfBOW features encode the word frequencies, while MSM only considers a single occurrence of words. When incorporating the word frequencies with our TF weighted word subspace, we achieved a higher accuracy of 92.01%, performing better than MNB at a significance level of 95%."
],
[
"In this paper, we proposed a new method for text classification, based on the novel concept of word subspace under the MSM framework. We also proposed the term-frequency weighted word subspace which can incorporate the frequency of words directly in the modeling of the subspace by using a weighted version of PCA.",
"Most of the conventional text classification methods are based on the bag-of-words features, which are very simple to compute and had been proved to produce positive results. However, bag-of-words are commonly high dimensional models, with a sparse representation, which is computationally heavy to model. Also, bag-of-words fail to convey the semantic meaning of words inside a text. Due to these problems, neural networks started to be applied to generate a vector representation of words. Despite the fact that these representations can encode the semantic meaning of words, conventional methods do not work well when considering word vectors separately.",
"In our work, we focused on the word2vec representation, which can embed the semantic structure of words, rendering vector angles as a useful metric to show meaningful similarities between words. Our experiments showed that our word subspace modeling along with the MSM outperforms most of the conventional methods. Ultimately, our TF weighted subspace formulation resulted in significantly higher accuracy when compared to all conventional text classification methods discussed in this work. It is important to note that our method does not consider the order of the words in a text, resulting in a loss of context information. As a future work, we wish to extend our word subspace concept further in mainly two directions. First, we seek to encode word order, which may enrich the representation of context information. Second, we wish to model dynamic context change, enabling analysis of large documents, by having a long-short memory to interpret information using cues from different parts of a text."
],
[
"This work is supported by JSPS KAKENHI Grant Number JP16H02842 and the Japanese Ministry of Education, Culture, Sports, Science, and Technology (MEXT) scholarship."
]
],
"section_name": [
"Introduction",
"Related Work",
"Text Representation with bag-of-words",
"Conventional text classification methods",
"Word subspace",
"Text classification based on word subspace",
"TF weighted word subspace",
"Experimental Evaluation",
"Evaluation of the word subspace representation",
"Text classification experiment",
"Discussion",
"Conclusions and Future Work",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"254f8c852f0c98665a08099e4dc2fbfacdd67b3d",
"3d7c9dc0e6dc838ce315d6eaf61955465c056b73"
],
"answer": [
{
"evidence": [
"In this section we describe the experiments performed to demonstrate the validity of our proposed method and its extension. We used the Reuters-8 dataset without stop words from BIBREF27 aiming at single-label classification, which is a preprocessed format of the Reuters-21578. Words in the texts were considered as they appeared, without performing stemming or typo correction. This database has eight different classes with the number of samples varying from 51 to over 3000 documents, as can be seen in Table 1 ."
],
"extractive_spans": [
"Reuters-8 dataset without stop words"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used the Reuters-8 dataset without stop words from BIBREF27 aiming at single-label classification, which is a preprocessed format of the Reuters-21578."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this section we describe the experiments performed to demonstrate the validity of our proposed method and its extension. We used the Reuters-8 dataset without stop words from BIBREF27 aiming at single-label classification, which is a preprocessed format of the Reuters-21578. Words in the texts were considered as they appeared, without performing stemming or typo correction. This database has eight different classes with the number of samples varying from 51 to over 3000 documents, as can be seen in Table 1 ."
],
"extractive_spans": [],
"free_form_answer": "The Reuters-8 dataset (with stop words removed)",
"highlighted_evidence": [
"We used the Reuters-8 dataset without stop words from BIBREF27 aiming at single-label classification, which is a preprocessed format of the Reuters-21578. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"ea17aa5ea17e7838a55c252484390079b16c31ae"
]
},
{
"annotation_id": [
"339dccb7dc31e8473268f27cec6f984f24ee876e"
],
"answer": [
{
"evidence": [
"To tackle this problem, we introduce the novel concept of word subspace. It is mathematically defined as a low dimensional linear subspace in a word vector space with high dimensionality. Given that words from texts of the same class belong to the same context, it is possible to model word vectors of each class as word subspaces and efficiently compare them in terms of similarity by using canonical angles between the word subspaces. Through this representation, most of the variability of the class is retained. Consequently, a word subspace can effectively and compactly represent the context of the corresponding text. We achieve this framework through the mutual subspace method (MSM) BIBREF9 ."
],
"extractive_spans": [],
"free_form_answer": "Word vectors, usually in the context of others within the same class",
"highlighted_evidence": [
"Given that words from texts of the same class belong to the same context, it is possible to model word vectors of each class as word subspaces and efficiently compare them in terms of similarity by using canonical angles between the word subspaces. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ea17aa5ea17e7838a55c252484390079b16c31ae"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"Which dataset has been used in this work?",
"What can word subspace represent?"
],
"question_id": [
"440faf8d0af8291d324977ad0f68c8d661fe365e",
"0ec56e15005a627d0b478a67fd627a9d85c3920e"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. Comparison of sets of word vectors by the mutual subspace method.",
"Fig. 2. Eigenvalue distribution on word subspaces.",
"TABLE II RESULTS FOR THE TEXT CLASSIFICATION EXPERIMENT ON REUTERS-8 DATABASE"
],
"file": [
"5-Figure1-1.png",
"6-Figure2-1.png",
"6-TableII-1.png"
]
} | [
"Which dataset has been used in this work?",
"What can word subspace represent?"
] | [
[
"1806.03125-Experimental Evaluation-0"
],
[
"1806.03125-Introduction-5"
]
] | [
"The Reuters-8 dataset (with stop words removed)",
"Word vectors, usually in the context of others within the same class"
] | 244 |
1910.14076 | A Neural Topic-Attention Model for Medical Term Abbreviation Disambiguation | Automated analysis of clinical notes is attracting increasing attention. However, there has not been much work on medical term abbreviation disambiguation. Such abbreviations are abundant, and highly ambiguous, in clinical documents. One of the main obstacles is the lack of large-scale, balanced labeled datasets. To address the issue, we propose a few-shot learning approach to take advantage of limited labeled data. Specifically, a neural topic-attention model is applied to learn improved contextualized sentence representations for medical term abbreviation disambiguation. Another vital issue is that the existing scarce annotations are noisy and incomplete. We re-examine and correct an existing dataset for training and collect a test set to evaluate the models fairly, especially for rare senses. We train our model on the training set, which contains 30 abbreviation terms as categories (on average, 479 samples and 3.24 classes per term) selected from a public abbreviation disambiguation dataset, and then test on a manually created balanced dataset (each class in each term has 15 samples). We show that enhancing the sentence representation with topic information improves the performance on small-scale unbalanced training datasets by a large margin, compared to a number of baseline models. | {
"paragraphs": [
[
"Medical text mining is an exciting area and is becoming attractive to natural language processing (NLP) researchers. Clinical notes are an example of text in the medical area that recent work has focused on BIBREF0, BIBREF1, BIBREF2. This work studies abbreviation disambiguation on clinical notes BIBREF3, BIBREF4, specifically those used commonly by physicians and nurses. Such clinical abbreviations can have a large number of meanings, depending on the specialty BIBREF5, BIBREF6. For example, the term MR can mean magnetic resonance, mitral regurgitation, mental retardation, medical record and the general English Mister (Mr.). Table TABREF1 illustrates such an example. Abbreviation disambiguation is an important task in medical text understanding task BIBREF7. Successful recognition of the abbreviations in the notes can contribute to downstream tasks such as classification, named entity recognition, and relation extraction BIBREF7.",
"Recent work focuses on formulating the abbreviation disambiguation task as a classification problem, where the possible senses of a given abbreviation term are pre-defined with the help of domain experts BIBREF6, BIBREF5. Traditional features such as part-of-speech (POS) and Term Frequency-Inverse Document Frequency (TF-IDF) are widely investigated for clinical notes classification. Classifiers like support vector machines (SVMs) and random forests (RFs) are used to make predictions BIBREF1. Such methods depend heavily on feature engineering. Recently, deep features have been investigated in the medical domain. Word embeddings BIBREF8 and Convolutional Neural Networks (CNNs) BIBREF9, BIBREF10 provide very competitive performance on text classification for clinical notes and abbreviation disambiguation task BIBREF0, BIBREF11, BIBREF12, BIBREF6. Beyond vanilla embeddings, BIBREF13 utilized contextual features to do abbreviation expansion. Another challenge is the difficulty in obtaining training data: clinical notes have many restrictions due to privacy issues and require domain experts to develop high-quality annotations, thus leading to limited annotated training and testing data. Another difficulty is that in the real world (and in the existing public datasets), some abbreviation term-sense pairs (for example, AB as abortion) have a very high frequency of occurrence BIBREF5, while others are rarely found. This long tail issue creates the challenge of training under unbalanced sample distributions. We tackle these problems in the setting of few-shot learning BIBREF14, BIBREF15 where only a few or a low number of samples can be found in the training dataset to make use of limited resources. We propose a model that combines deep contextual features based on ELMo BIBREF16 and topic information to solve the abbreviation disambiguation task.",
"Our contributions can be summarized as: 1) we re-examined and corrected an existing dataset for training and we collected a test set for evaluation with focus especially for rare senses; 2) we proposed a few-shot learning approach which combines topic information and contextualized word embeddings to solve clinical abbreviation disambiguation task. The implementation is available online; 3) as limited research are conducted on this particular task, we evaluated and compared a number of baseline methods including classical models and deep models comprehensively."
],
[
"",
"Training Dataset UM Inventory BIBREF5 is a public dataset created by researchers from the University of Minnesota, containing about 37,500 training samples with 75 abbreviation terms. Existing work reports abbreviation disambiguation results on 50 abbreviation terms BIBREF6, BIBREF5, BIBREF17. However, after carefully reviewing this dataset, we found that it contains many samples where medical professionals disagree: wrong samples and uncategorized samples. Due to these mistakes and flaws of this dataset, we removed the erroneous samples and eventually selected 30 abbreviation terms as our training dataset that can be made public. Among all the abbreviation terms, we have 11,466 samples, and 93 term-sense pairs in total (on average 123.3 samples/term-sense pair and 382.2 samples/term). Some term-sense pairs are very popular with a larger number of training samples but some are not (typically less than 5), we call them rare-sense cases . More details can be found in Appendix SECREF7.",
"Testing Dataset Our testing dataset takes MIMIC-III BIBREF18 and PubMed as data sources. Here we are referring to all the notes data from MIMIC-III (NOTEEVENTS table ) and Case Reports articles from PubMed, as this type of contents are close to medical notes. We provide detailed information in Appendix SECREF8, including the steps to create the testing dataset. Eventually, we have a balanced testing dataset, where each term-sense pair has at least 11 and up to 15 samples for training (on average, each pair has 14.56 samples and the median sample number is 15)."
],
[
"We conducted a comprehensive comparison with the baseline models, and some of them were never investigated for the abbreviation disambiguation task. We applied traditional features by simply taking the TF-IDF features as the inputs into the classic classifiers. Deep features are also considered: a Doc2vec model BIBREF19 was pre-trained using Gensim and these word embeddings were applied to initialize deep models and fine-tuned.",
"TF-IDF: We applied TF-IDF features as inputs to four classifiers: support vector machine classifier (SVM), logistic regression classifier (LR), Naive Bayes classifier (NB) and random forest (RF); CNN: We followed the same architecture as BIBREF9; LSTM: We applied an LSTM model BIBREF20 to classify the sentences with pre-trained word embeddings;LSTM-soft: We then added a soft-attention BIBREF21 layer on top of the LSTM model where we computed soft alignment scores over each of the hidden states; LSTM-self: We applied a self-attention layer BIBREF22 to LSTM model. We denote the content vector as $c_i$ for each sentence $i$, as the input to the last classification layer."
],
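A minimal, runnable sketch of the TF-IDF baseline group described above, assuming scikit-learn is available; the toy sentences, sense labels, and hyperparameters below are illustrative placeholders and are not taken from the paper or the UM Inventory data.

# Hypothetical toy data standing in for (sentence, sense) pairs of one abbreviation term.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

sentences = [
    "patient allergic to fish and shellfish",
    "fish analysis confirmed the chromosomal abnormality",
    "diet includes fish twice a week",
    "fish testing ordered for a suspected deletion syndrome",
]
labels = ["seafood", "fluorescent in situ hybridization",
          "seafood", "fluorescent in situ hybridization"]

classifiers = {
    "SVM": LinearSVC(),
    "LR": LogisticRegression(max_iter=1000),
    "NB": MultinomialNB(),
    "RF": RandomForestClassifier(n_estimators=100),
}
for name, clf in classifiers.items():
    # TF-IDF features feed each classic classifier, as in the baseline group above.
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(sentences, labels)
    print(name, model.predict(["fish probe showed two signals"]))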
[
"",
"ELMo ELMo is a new type of word representation introduced by BIBREF16 which considers contextual information captured by pre-trained BiLSTM language models. Similar works like BERT BIBREF23 and BioBERT BIBREF24 also consider context but with deeper structures. Compared with those models, ELMo has less parameters and not easy to be overfitting. We trained our ELMo model on the MIMIC-III corpus. Since some sentences also appear in the test set, one may raise the concern of performance inflation in testing. However, we pre-trained ELMo using the whole corpus of MIMIC in an unsupervised way, so the classification labels were not visible during training. To train the ELMo model, we adapted code from AllenNLP, we set the word embedding dimension to 64 and applied two BiLSTM layers. For all of our experiments that involved with ELMo, we initialized the word embeddings from the last layer of ELMo.",
"Topic-attention We propose a neural topic-attention model for text classification. Our assumption is that the topic information plays an important role in doing the disambiguation given a sentence. For example, the abbreviation of the medical term FISH has two potential senses: general English fish as seafood and the sense of fluorescent in situ hybridization BIBREF17. The former case always goes with the topic of food, allergies while the other one appears in the topic of some examination test reports. In our model, a topic-attention module is applied to add topic information into the final sentence representation. This module calculates the distribution of topic-attention weights on a list of topic vectors. As shown in Figure SECREF5, we took the content vector $c_i$ (from soft-attention BIBREF22) and a topic matrix $T_{topic}=[t_1,t_2,..,t_j]$ (where each $t_i$ is a column vector in the figure and we illustrate four topics) as the inputs to the topic-attention module, and then calculated a weighted summation of the topic vectors. The final sentence representation $r_i$ for the sentence $i$ was calculated as the following:",
"where $W_{topic}$ and $b _ { topic }$ are trainable parameters, $\\beta _{it}$ represents the topic-attention weights. The final sentence representation is denoted as $r_i$, and $[\\cdot ,\\cdot ]$ means concatenation. Here $s_i$ is the representation of the sentence which includes the topic information. The final sentence representation $r_i$ is the concatenation of $c_i$ and $s_i$, now we have both context-aware sentence representation and topic-related representation. Then we added a fully-connected layer, followed by a Softmax to do classification with cross-entropy loss function.",
"Topic Matrix To generate the topic matrix $T_{topic}$ as in Equation DISPLAY_FORM9, we propose a convolution-based method to generate topic vectors. We first pre-trained a topic model using the Latent Dirichlet Allocation (LDA) BIBREF25 model on MIMIC-III notes as used by the ELMo model. We set the number of topics to be 50 and ran for 30 iterations. Then we were able to get a list of top $k$ words (we set $k=100$) for each topic. To get the topic vector $t$ for a specific topic:",
"where $e_j$ (column vector) is the pre-trained Doc2vec word embedding for the top word $j$ from the current topic, and $Conv(\\cdot )$ indicates a convolutional layer; $maxpool(\\cdot )$ is a max pooling layer. We finally reshaped the output $t$ into a 1-dimension vector. Eventually we collected all topic vectors as the topic matrix $T_{topic}$ in Figure SECREF5.",
""
],
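A minimal sketch of the topic-attention module and the convolution-based topic vectors described above, assuming PyTorch; the scoring function, dimensions, and kernel size are assumptions for illustration (the excerpt does not fully specify them), so this should not be read as the paper's released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def topic_vector(word_embs, out_dim=64):
    # word_embs: (k, emb_dim) embeddings of a topic's top-k words (Doc2vec in the paper).
    # A randomly initialized Conv1d stands in for the learned convolutional layer.
    conv = nn.Conv1d(word_embs.size(1), out_dim, kernel_size=3, padding=1)
    h = conv(word_embs.t().unsqueeze(0))                      # (1, out_dim, k)
    return F.max_pool1d(h, kernel_size=h.size(-1)).view(-1)   # max over words -> (out_dim,)

class TopicAttention(nn.Module):
    # Attends over a fixed topic matrix, conditioned on the sentence content vector c_i.
    def __init__(self, content_dim, topic_dim, n_classes):
        super().__init__()
        self.score = nn.Linear(content_dim + topic_dim, 1)     # stands in for W_topic, b_topic
        self.classify = nn.Linear(content_dim + topic_dim, n_classes)

    def forward(self, c, topics):
        # c: (batch, content_dim); topics: (n_topics, topic_dim)
        batch, n_topics = c.size(0), topics.size(0)
        c_exp = c.unsqueeze(1).expand(-1, n_topics, -1)
        t_exp = topics.unsqueeze(0).expand(batch, -1, -1)
        beta = F.softmax(self.score(torch.cat([c_exp, t_exp], dim=-1)).squeeze(-1), dim=-1)
        s = torch.einsum("bt,btd->bd", beta, t_exp)            # weighted sum of topic vectors
        r = torch.cat([c, s], dim=-1)                          # r_i = [c_i, s_i]
        return self.classify(r)                                # logits for cross-entropy loss

# Illustrative usage: 50 topics built from random stand-in embeddings, batch of 8 sentences.
topics = torch.stack([topic_vector(torch.randn(100, 64)) for _ in range(50)]).detach()
model = TopicAttention(content_dim=128, topic_dim=64, n_classes=3)
logits = model(torch.randn(8, 128), topics)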
[
"We did three groups of experiments including two baseline groups and our proposed model. The first group was the TF-IDF features in Section SECREF3 for traditional models. The Naïve Bayesian classifier (NB) has the highest scores among all traditional methods. The second group of experiments used neural models described in Section SECREF3, where LSTM with self attention model (LSTM-self) has competitive results among this group, we choose this model as our base model. Notably, this is the first study that compares and evaluates LSTM-based models on the medical term abbreviation disambiguation task.",
"tableExperimental Results: we report macro F1 in all the experiments. figureTopic-attention Model",
"The last group contains the results of our proposed model with different settings. We used Topic Only setting on top of the base model, where we only added the topic-attention layer, and all the word embeddings were from our pre-trained Doc2vec model and were fine-tuned during back propagation. We could observe that compared with the base model, we have an improvement of 7.36% on accuracy and 9.69% on F1 score. Another model (ELMo Only) is to initialize the word embeddings of the base model using the pre-trained ELMo model, and here no topic information was added. Here we have higher scores than Topic Only, and the accuracy and F1 score increased by 9.87% and 12.26% respectively compared with the base model. We then conducted a combination of both(ELMo+Topic), where the word embeddings from the sentences were computed from the pre-trained ELMo model, and the topic representations were from the pre-trained Doc2vec model. We have a remarkable increase of 12.27% on the accuracy and 14.86% on F1 score.",
"To further compare our proposed topic-attention model and the base model, we report an average of area under the curve(AUC) score among all 30 terms: the base model has an average AUC of 0.7189, and our topic-attention model (ELMo+Topic) achieves an average AUC of 0.8196. We provide a case study in Appendix SECREF9. The results show that the model can benefit from ELMo, which considers contextual features, and the topic-attention module, which brings in topic information. We can conclude that under the few-shot learning setting, our proposed model can better capture the sentence features when only limited training samples are explored in a small-scale dataset.",
""
],
[
"In this paper, we propose a neural topic-attention model with few-shot learning for medical abbreviation disambiguation task. We also manually cleaned and collected training and testing data for this task, which we release to promote related research in NLP with clinical notes. In addition, we evaluated and compared a comprehensive set of baseline models, some of which had never been applied to the medical term abbreviation disambiguation task. Future work would be to adapt other models like BioBERT or BERT to our proposed topic-attention model. We will also extend the method into other clinical notes tasks such as drug name recognition and ICD-9 code auto-assigning BIBREF26. In addition, other LDA-based approach can be investigated."
],
[
"Figure FIGREF11 shows the histogram for the distribution of term-sense pair sample numbers. The X-axis gives the pair sample numbers and the Y-axis shows the counts. For example, the first bar shows that there are 43 term-sense pairs that have the sample number in the range of 0-50.",
"We also show histogram of class numbers for all terms in Figure FIGREF11. The Y-axis gives the counts while the X-axis gives the number of classes. For instance, the first bar means there are 12 terms contain 2 classes."
],
[
"Since the training dataset is unbalanced and relatively small, it is hard to split it further into training and testing. Existing work BIBREF6, BIBREF5 conducted fold validation on the datasets and we found there are extreme rare senses for which only one or two samples exist. Besides, we believe that it is better to evaluate on a balanced dataset to determine whether it performs equally well on all classes. While most of the works deal with unbalanced training and testing that which may lead to very high accuracy if there is a dominating class in both training and testing set, the model may have a poor performance in rare testing cases because only a few samples have been seen during training. To be fair to all the classes, a good performance on these rare cases is also required otherwise it may lead to a severe situation. In this work, we are very interested to see how the model works among all senses especially the rare ones. Also, we can prevent the model from trivially predicting the dominant class and achieving high accuracy. As a result, we decided to create a dataset with the same number of samples for each case in the test dataset.",
"Our testing dataset takes MIMIC-III BIBREF18 and PubMed as data sources. Here we are referring to all the notes data from MIMIC-III (NOTEEVENTS table ) and Case Reports articles from PubMed, as these contents are close to medical notes. To create the test set, we first followed the approach by BIBREF6 who applied an auto-generating method. Initially, we built a term-sense dictionary from the training dataset. Then we did matching for the sense words or phrases in the MIMIC-III notes dataset, and once there is a match, we replaced the words or phrases with the abbreviation term . We then asked two researchers with a medical background to check the matched samples manually with the following judgment: given this sentence and the abbreviation term and its sense, do you think the content is enough for you to guess the sense and whether this is the right sense? To estimate the agreement on the annotations, we selected a subset which contains 120 samples randomly and let the two annotators annotate individually. We got a Kappa score BIBREF27 of 0.96, which is considered as a near perfect agreement. We then distributed the work to the two annotators, and each of them labeled a half of the dataset, which means each sample was only labeled by a single annotator. For some rare term-sense pairs, we failed to find samples from MIMIC-III. The annotators then searched these senses via PubMed data source manually, aiming to find clinical notes-like sentences. They picked good sentences from these results as testing samples where the keywords exist and the content is informative. For those senses that are extremely rare, we let the annotators create sentences in the clinical note style as testing samples according to their experiences. Eventually, we have a balanced testing dataset, where each term-sense pair has around 15 samples for testing (on average, each pair has 14.56 samples and the median sample number is 15), and a comparison with training dataset is shown in Figure FIGREF11. Due to the difficulty for collecting the testing dataset, we decided to only collect for a random selection of 30 terms. On average, it took few hours to generate testing samples for each abbreviation term per researcher."
],
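A small sketch of the auto-generation step described above (building a term-sense dictionary and replacing matched sense phrases with the abbreviation term), assuming Python; the dictionary entries and sentences are hypothetical, and the generated candidates would still go through the manual review described in this section.

import re

# Hypothetical term-sense dictionary; the real one is built from the cleaned training set.
term_senses = {
    "FISH": ["fluorescent in situ hybridization"],
    "MR": ["mitral regurgitation", "magnetic resonance"],
}

def generate_candidates(note_sentences, term_senses):
    # Replace a matched sense phrase with its abbreviation to create a labeled candidate sample.
    candidates = []
    for sent in note_sentences:
        for term, senses in term_senses.items():
            for sense in senses:
                pattern = re.compile(re.escape(sense), flags=re.IGNORECASE)
                if pattern.search(sent):
                    candidates.append((pattern.sub(term, sent), term, sense))
    return candidates

print(generate_candidates(
    ["Echo showed moderate mitral regurgitation.",
     "Fluorescent in situ hybridization confirmed the deletion."],
    term_senses))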
[
"We now select two representative terms AC, IM and plot their receiver operating characteristic(ROC) curves. The term has a relatively large number of classes and the second one has extremely unbalanced training samples. We show the details in Table TABREF16. We have 8 classes in term AC. Figure FIGREF15(a) illustrates the results of our best performed model and Figure FIGREF15(b) shows the results of the base model. The accuracy and F1 score have an improvement from 0.3898 and 0.2830 to 0.4915 and 0.4059 respectively. Regarding the rare senses (for example, class 0, 1, 4 and 6), we can observe an increase in the ROC areas. Class 6 has an obvious improvement from 0.75 to 1.00. Such improvements in the rare senses make a huge difference in the reported average accuracy and F1 score, since we have a nearly equal number of samples for each class in the testing data. Similarly, we show the plots for IM term in Figure FIGREF15(c) and FIGREF15(d). IM has only two classes, but they are very unbalanced in training set, as shown in Table TABREF16. The accuracy and F1 scores improved from 0.6667 and 0.6250 to 0.8667 and 0.8667 respectively. We observe improvements in the ROC areas for both classes. This observation further shows that our model is more sensitive to all the class samples compared to the base model, even for the terms that have only a few samples in the training set. Again, by plotting the ROC curves and comparing AUC areas, we show that our model, which applies ELMo and topic-attention, has a better representation ability under the setting of few-shot learning."
]
],
"section_name": [
"Introduction",
"Datasets",
"Baselines",
"Topic-attention Model",
"Evaluation",
"Conclusion",
"Dataset Details",
"Testing Dataset",
"Case Study"
]
} | {
"answers": [
{
"annotation_id": [
"5e562935cb84f11461fb1cb7a7eb12bbe8c81286",
"8dab99863f5097ee88b7eeeedce937abc4227043"
],
"answer": [
{
"evidence": [
"The last group contains the results of our proposed model with different settings. We used Topic Only setting on top of the base model, where we only added the topic-attention layer, and all the word embeddings were from our pre-trained Doc2vec model and were fine-tuned during back propagation. We could observe that compared with the base model, we have an improvement of 7.36% on accuracy and 9.69% on F1 score. Another model (ELMo Only) is to initialize the word embeddings of the base model using the pre-trained ELMo model, and here no topic information was added. Here we have higher scores than Topic Only, and the accuracy and F1 score increased by 9.87% and 12.26% respectively compared with the base model. We then conducted a combination of both(ELMo+Topic), where the word embeddings from the sentences were computed from the pre-trained ELMo model, and the topic representations were from the pre-trained Doc2vec model. We have a remarkable increase of 12.27% on the accuracy and 14.86% on F1 score."
],
"extractive_spans": [
"7.36% on accuracy and 9.69% on F1 score"
],
"free_form_answer": "",
"highlighted_evidence": [
". We could observe that compared with the base model, we have an improvement of 7.36% on accuracy and 9.69% on F1 score. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The last group contains the results of our proposed model with different settings. We used Topic Only setting on top of the base model, where we only added the topic-attention layer, and all the word embeddings were from our pre-trained Doc2vec model and were fine-tuned during back propagation. We could observe that compared with the base model, we have an improvement of 7.36% on accuracy and 9.69% on F1 score. Another model (ELMo Only) is to initialize the word embeddings of the base model using the pre-trained ELMo model, and here no topic information was added. Here we have higher scores than Topic Only, and the accuracy and F1 score increased by 9.87% and 12.26% respectively compared with the base model. We then conducted a combination of both(ELMo+Topic), where the word embeddings from the sentences were computed from the pre-trained ELMo model, and the topic representations were from the pre-trained Doc2vec model. We have a remarkable increase of 12.27% on the accuracy and 14.86% on F1 score.",
"FLOAT SELECTED: Figure 1: Topic-attention Model"
],
"extractive_spans": [],
"free_form_answer": "it has 0.024 improvement in accuracy comparing to ELMO Only and 0.006 improvement in F1 score comparing to ELMO Only too",
"highlighted_evidence": [
"The last group contains the results of our proposed model with different settings. We used Topic Only setting on top of the base model, where we only added the topic-attention layer, and all the word embeddings were from our pre-trained Doc2vec model and were fine-tuned during back propagation. We could observe that compared with the base model, we have an improvement of 7.36% on accuracy and 9.69% on F1 score. Another model (ELMo Only) is to initialize the word embeddings of the base model using the pre-trained ELMo model, and here no topic information was added. Here we have higher scores than Topic Only, and the accuracy and F1 score increased by 9.87% and 12.26% respectively compared with the base model. We then conducted a combination of both(ELMo+Topic), where the word embeddings from the sentences were computed from the pre-trained ELMo model, and the topic representations were from the pre-trained Doc2vec model. We have a remarkable increase of 12.27% on the accuracy and 14.86% on F1 score.",
"FLOAT SELECTED: Figure 1: Topic-attention Model"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"2793d7915581ea7d724be2f232f6edde0b26c1e6",
"70732c1077bdec8fe9483e34ecfdaae1f95b843d"
],
"answer": [
{
"evidence": [
"TF-IDF: We applied TF-IDF features as inputs to four classifiers: support vector machine classifier (SVM), logistic regression classifier (LR), Naive Bayes classifier (NB) and random forest (RF); CNN: We followed the same architecture as BIBREF9; LSTM: We applied an LSTM model BIBREF20 to classify the sentences with pre-trained word embeddings;LSTM-soft: We then added a soft-attention BIBREF21 layer on top of the LSTM model where we computed soft alignment scores over each of the hidden states; LSTM-self: We applied a self-attention layer BIBREF22 to LSTM model. We denote the content vector as $c_i$ for each sentence $i$, as the input to the last classification layer."
],
"extractive_spans": [
"support vector machine classifier (SVM)",
"logistic regression classifier (LR)",
"Naive Bayes classifier (NB)",
"random forest (RF)",
"CNN",
"LSTM ",
"LSTM-soft",
"LSTM-self"
],
"free_form_answer": "",
"highlighted_evidence": [
"TF-IDF: We applied TF-IDF features as inputs to four classifiers: support vector machine classifier (SVM), logistic regression classifier (LR), Naive Bayes classifier (NB) and random forest (RF); CNN: We followed the same architecture as BIBREF9; LSTM: We applied an LSTM model BIBREF20 to classify the sentences with pre-trained word embeddings;LSTM-soft: We then added a soft-attention BIBREF21 layer on top of the LSTM model where we computed soft alignment scores over each of the hidden states; LSTM-self: We applied a self-attention layer BIBREF22 to LSTM model. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conducted a comprehensive comparison with the baseline models, and some of them were never investigated for the abbreviation disambiguation task. We applied traditional features by simply taking the TF-IDF features as the inputs into the classic classifiers. Deep features are also considered: a Doc2vec model BIBREF19 was pre-trained using Gensim and these word embeddings were applied to initialize deep models and fine-tuned.",
"TF-IDF: We applied TF-IDF features as inputs to four classifiers: support vector machine classifier (SVM), logistic regression classifier (LR), Naive Bayes classifier (NB) and random forest (RF); CNN: We followed the same architecture as BIBREF9; LSTM: We applied an LSTM model BIBREF20 to classify the sentences with pre-trained word embeddings;LSTM-soft: We then added a soft-attention BIBREF21 layer on top of the LSTM model where we computed soft alignment scores over each of the hidden states; LSTM-self: We applied a self-attention layer BIBREF22 to LSTM model. We denote the content vector as $c_i$ for each sentence $i$, as the input to the last classification layer."
],
"extractive_spans": [
"support vector machine classifier (SVM)",
"logistic regression classifier (LR)",
"Naive Bayes classifier (NB)",
"random forest (RF)",
"CNN",
"LSTM ",
"LSTM-soft",
"LSTM-self"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conducted a comprehensive comparison with the baseline models, and some of them were never investigated for the abbreviation disambiguation task. We applied traditional features by simply taking the TF-IDF features as the inputs into the classic classifiers. Deep features are also considered: a Doc2vec model BIBREF19 was pre-trained using Gensim and these word embeddings were applied to initialize deep models and fine-tuned.\n\nTF-IDF: We applied TF-IDF features as inputs to four classifiers: support vector machine classifier (SVM), logistic regression classifier (LR), Naive Bayes classifier (NB) and random forest (RF); CNN: We followed the same architecture as BIBREF9; LSTM: We applied an LSTM model BIBREF20 to classify the sentences with pre-trained word embeddings;LSTM-soft: We then added a soft-attention BIBREF21 layer on top of the LSTM model where we computed soft alignment scores over each of the hidden states; LSTM-self: We applied a self-attention layer BIBREF22 to LSTM model. We denote the content vector as $c_i$ for each sentence $i$, as the input to the last classification layer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"c3830d19ddf1d28c754e8cf952e2661c651b6c08"
],
"answer": [
{
"evidence": [
"Our testing dataset takes MIMIC-III BIBREF18 and PubMed as data sources. Here we are referring to all the notes data from MIMIC-III (NOTEEVENTS table ) and Case Reports articles from PubMed, as these contents are close to medical notes. To create the test set, we first followed the approach by BIBREF6 who applied an auto-generating method. Initially, we built a term-sense dictionary from the training dataset. Then we did matching for the sense words or phrases in the MIMIC-III notes dataset, and once there is a match, we replaced the words or phrases with the abbreviation term . We then asked two researchers with a medical background to check the matched samples manually with the following judgment: given this sentence and the abbreviation term and its sense, do you think the content is enough for you to guess the sense and whether this is the right sense? To estimate the agreement on the annotations, we selected a subset which contains 120 samples randomly and let the two annotators annotate individually. We got a Kappa score BIBREF27 of 0.96, which is considered as a near perfect agreement. We then distributed the work to the two annotators, and each of them labeled a half of the dataset, which means each sample was only labeled by a single annotator. For some rare term-sense pairs, we failed to find samples from MIMIC-III. The annotators then searched these senses via PubMed data source manually, aiming to find clinical notes-like sentences. They picked good sentences from these results as testing samples where the keywords exist and the content is informative. For those senses that are extremely rare, we let the annotators create sentences in the clinical note style as testing samples according to their experiences. Eventually, we have a balanced testing dataset, where each term-sense pair has around 15 samples for testing (on average, each pair has 14.56 samples and the median sample number is 15), and a comparison with training dataset is shown in Figure FIGREF11. Due to the difficulty for collecting the testing dataset, we decided to only collect for a random selection of 30 terms. On average, it took few hours to generate testing samples for each abbreviation term per researcher."
],
"extractive_spans": [],
"free_form_answer": "30 terms, each term-sanse pair has around 15 samples for testing",
"highlighted_evidence": [
"Eventually, we have a balanced testing dataset, where each term-sense pair has around 15 samples for testing (on average, each pair has 14.56 samples and the median sample number is 15), and a comparison with training dataset is shown in Figure FIGREF11. Due to the difficulty for collecting the testing dataset, we decided to only collect for a random selection of 30 terms. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"6b3d8d7b0a97363f72bed0f1df94efbe9ebb71f2"
],
"answer": [
{
"evidence": [
"Training Dataset UM Inventory BIBREF5 is a public dataset created by researchers from the University of Minnesota, containing about 37,500 training samples with 75 abbreviation terms. Existing work reports abbreviation disambiguation results on 50 abbreviation terms BIBREF6, BIBREF5, BIBREF17. However, after carefully reviewing this dataset, we found that it contains many samples where medical professionals disagree: wrong samples and uncategorized samples. Due to these mistakes and flaws of this dataset, we removed the erroneous samples and eventually selected 30 abbreviation terms as our training dataset that can be made public. Among all the abbreviation terms, we have 11,466 samples, and 93 term-sense pairs in total (on average 123.3 samples/term-sense pair and 382.2 samples/term). Some term-sense pairs are very popular with a larger number of training samples but some are not (typically less than 5), we call them rare-sense cases . More details can be found in Appendix SECREF7."
],
"extractive_spans": [
" UM Inventory "
],
"free_form_answer": "",
"highlighted_evidence": [
"Training Dataset UM Inventory BIBREF5 is a public dataset created by researchers from the University of Minnesota, containing about 37,500 training samples with 75 abbreviation terms. Existing work reports abbreviation disambiguation results on 50 abbreviation terms BIBREF6, BIBREF5, BIBREF17. However, after carefully reviewing this dataset, we found that it contains many samples where medical professionals disagree: wrong samples and uncategorized samples. Due to these mistakes and flaws of this dataset, we removed the erroneous samples and eventually selected 30 abbreviation terms as our training dataset that can be made public."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How big are improvements of small-scale unbalanced datasets when sentence representation is enhanced with topic information?",
"To what baseline models is proposed model compared?",
"How big is dataset for testing?",
"What existing dataset is re-examined and corrected for training?"
],
"question_id": [
"a712718e6596ba946f29a99838d82f95b9ebb1ce",
"3116453e35352a3a90ee5b12246dc7f2e60cfc59",
"dfca00be3284cc555a6a4eac4831471fb1f5875b",
"a9a532399237b514c1227f2d6be8601474e669be"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Examples showing the abbreviation term MR: MR from Note 1 is the abbreviation of ‘mental retardation’. One could easily misinterpret it as ‘mitral regurgitation’which may lead to further imaging studies with subsequent delaying or avoidance of non-urgent interventions due to higher risk of complications. In Note 2, MR means ‘mitral regurgitation’[6].",
"Figure 1: Topic-attention Model",
"Figure 2: Comparison of sample numbers. X-axis: all the 93 term-sense pairs (due to space limits, we did not list the pairs in the plot). Y-axis: how many samples in each pair.",
"Figure 3: The term-sense pair count distribution histogram among 93 pairs.",
"Table 3: Training and Testing sample distribution among classes in category AC and IM.",
"Figure 5: ROC curve plots on two selected terms: AC and IM. We compare results of the base model and our best model. Here we plot the ROC curve for each class."
],
"file": [
"2-Table1-1.png",
"4-Figure1-1.png",
"7-Figure2-1.png",
"7-Figure3-1.png",
"8-Table3-1.png",
"9-Figure5-1.png"
]
} | [
"How big are improvements of small-scale unbalanced datasets when sentence representation is enhanced with topic information?",
"How big is dataset for testing?"
] | [
[
"1910.14076-4-Figure1-1.png",
"1910.14076-Evaluation-2"
],
[
"1910.14076-Testing Dataset-1"
]
] | [
"it has 0.024 improvement in accuracy comparing to ELMO Only and 0.006 improvement in F1 score comparing to ELMO Only too",
"30 terms, each term-sanse pair has around 15 samples for testing"
] | 245 |